<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Isabel Dahlgren</title><description>Isabel Dahlgren</description><link>https://isabeldahlgren.github.io/</link><language>en</language><copyright>Copyright 2026, Ronalds Vilcins</copyright><lastBuildDate>Sun, 29 Mar 2026 00:00:00 +0000</lastBuildDate><generator>Hugo - gohugo.io</generator><docs>http://cyber.harvard.edu/rss/rss.html</docs><atom:link href="https://ronaldsvilcins.com/atom.xml" rel="self" type="application/atom+xml"/><item><title>AI for mathematics</title><link>https://isabeldahlgren.github.io/ai-for-mathematics/</link><description>&lt;p>Over the last few months, AIs have helped solve several open problems within mathematics. This February, Epoch added a set of &lt;a href="https://epochai.substack.com/p/frontiermath-open-problems-aletheia">open problems&lt;/a> within FrontierMath, their benchmark for mathematical research abilities. One of these problems &amp;ndash; a Ramsey-style problem on hypergraphs &amp;ndash; &lt;a href="https://epochai.substack.com/p/first-ai-solution-on-frontiermath">was recently solved by AIs&lt;/a>: researchers creamed out solutions from Gemini 3.1 Pro, GPT-5.4 (xhigh), and Opus 4.6 (max). And by now (late March 2026), &lt;a href="https://www.theatlantic.com/technology/2026/02/ai-math-terrance-tao/686107/">around 50 of the odd Erdös problems have been solved with the aid of AI tools&lt;/a>. Amazingly, &lt;a href="https://mathstodon.xyz/@tao/115855840223258103">some problems&lt;/a> were largely solved autonomously, without any human in the loop.&lt;/p>
&lt;h3 id="state-of-the-art">State of the art &lt;a href="#state-of-the-art" class="hash">#&lt;/a>&lt;/h3>
&lt;p>AIs have a number of strengths relative to humans. They excel at cracking competition-style problems &amp;ndash; self-contained problems which involve a small set of techniques &amp;ndash; and &lt;a href="https://deepmind.google/blog/ai-solves-imo-problems-at-silver-medal-level/">have done so for years&lt;/a>. They&amp;rsquo;re also good at analysing large sets of data and generating hypotheses. For example, &lt;a href="https://math.stackexchange.com/questions/1112485/how-did-gauss-discover-the-prime-number-theorem">Gauss made long tables of $\int_2^x (\log t)^{-1} \ dt$&lt;/a> before conjecturing the Prime Number Theorem; such tasks seem amenable to AI automation. AIs are also good at exhaustive case analysis. Some results, such as the &lt;a href="https://en.wikipedia.org/wiki/Four_color_theorem">Four Colour Theorem&lt;/a>, are difficult to prove using other approaches.&lt;/p>
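Gauss&amp;rsquo;s table-making is the kind of hypothesis-generating drudgery that is easy to mechanise today. Here&amp;rsquo;s a small sketch of my own (not from any of the linked sources) that compares the prime-counting function $\pi(x)$ against Gauss&amp;rsquo;s integral:

```python
import math

def li(x, steps=100_000):
    """Approximate Gauss's integral of dt/log t from 2 to x (trapezoid rule)."""
    a, b = 2.0, float(x)
    h = (b - a) / steps
    s = 0.5 * (1 / math.log(a) + 1 / math.log(b))
    for i in range(1, steps):
        s += 1 / math.log(a + i * h)
    return s * h

def prime_count(x):
    """pi(x) via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = b"\x00" * len(sieve[p * p :: p])
    return sum(sieve)

x = 10_000
print(prime_count(x), round(li(x), 1))  # the two quantities track each other closely
```

Staring at tables like this is exactly the kind of pattern-spotting task that seems amenable to AI automation.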
&lt;p>However, there are still areas where humans seem to outperform machines. While I&amp;rsquo;ve only used ordinary LLMs, rather than dedicated research tools like &lt;a href="https://openai.com/index/introducing-prism/">Prism&lt;/a>, I&amp;rsquo;ll share some of my experiences.&lt;/p>
&lt;p>Ordinary LLMs sometimes struggle to produce rigorous arguments &amp;ndash; arguments where every step needs to be correct. The term &amp;lsquo;AI slop&amp;rsquo; is fitting: unless they&amp;rsquo;re coupled with a proof verification engine, LLMs can be sloppy. For niche topics, I also find LLMs frustratingly inaccurate. They often flood the chat with irrelevant information; good supervisors have an uncanny ability to focus on what matters. Experts give you the right reference, figure out the actual crux, ask the right questions.&lt;/p>
&lt;p>This said, these weaknesses &amp;ndash; the lack of rigour and of general &lt;a href="https://colah.github.io/notes/taste/">research taste&lt;/a> &amp;ndash; seem fixable with the right scaffolding. Overall, it seems quite plausible to me (≥ 60%) that AIs could automate most aspects of mathematical research within 10-20 years.&lt;/p>
&lt;h3 id="the-future-of-mathematics">The future of mathematics &lt;a href="#the-future-of-mathematics" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Regardless of whether full automation is possible, it&amp;rsquo;s worth reflecting on how AI will transform the field of mathematics.&lt;/p>
&lt;p>Terry Tao helped co-found the &lt;a href="https://sair.foundation">Foundation for Science and AI Research (SAIR)&lt;/a> with the partial aim of exploring this question, and he has &lt;a href="https://arxiv.org/abs/2603.26524">many interesting ideas&lt;/a> on this topic&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>.&lt;/p>
&lt;p>First, as Terry Tao points out in the &lt;a href="https://www.dwarkesh.com/p/terence-tao">Dwarkesh interview&lt;/a>, the use of AIs shifts the bottleneck to verifying arguments rather than coming up with ideas&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. Hence, I suspect proof verification software like Lean will play a larger role in the future of mathematics. In the most extreme scenario (we like the extremal principle, no?), mathematical research might reduce to operating AIs: have AIs generate 1,000 hypotheses, make other AIs double-check the reasoning, then employ your paper-writing AIs, iterate.&lt;/p>
&lt;p>Second, the ease of generating new ideas significantly lowers the entry barrier for doing advanced mathematics: if you can obtain a proof from an LLM and have another AI system check it, you&amp;rsquo;ve effectively produced a proof. You might prove something without understanding what you&amp;rsquo;re doing&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>.&lt;/p>
&lt;h3 id="conclusions">Conclusions &lt;a href="#conclusions" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Needless to say, AIs will play an important role in the future of mathematics, though it&amp;rsquo;s unclear how things will play out exactly.&lt;/p>
&lt;p>I find it worth monitoring AI progress in mathematics for several reasons. Compared with more practical sciences like biology or chemistry, maths lends itself more easily to AI-driven progress. The nature of the subject, with results being either true or false, also enables efficient AI training; softer subjects give less RL signal. Moreover, using AIs to solve mathematical problems is a challenge that appeals to AI developers &amp;ndash; many of whom are nerd-sniped maths graduates &amp;ndash; so there&amp;rsquo;s a lot of effort going into AI for maths. In some sense, the &amp;lsquo;AI for science&amp;rsquo; movement seems upper-bounded by the ability of AIs to do maths research.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>In &lt;a href="https://terrytao.wordpress.com/2026/03/23/local-bernstein-theory-and-lower-bounds-for-lebesgue-constants/">this post&lt;/a>, he describes his use of AlphaEvolve to establish an integral bound. He writes: &amp;lsquo;Quite possibly AI tools would also have been able to assist with these steps [some intermediary bounds], but they were not necessary here; their main value for me was in quickly confirming that the approach I had in mind was numerically plausible, and in recognizing the right technique to solve one part of the toy problem I had isolated.&amp;rsquo;&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>To some extent, this is already the case &amp;ndash; the review process in mathematics is notoriously slow.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>You could prove results without insight before even LLMs, thanks to software like Lean. Case in point: the &lt;a href="https://xenaproject.wordpress.com/2020/12/05/liquid-tensor-experiment/">liquid tensor experiment&lt;/a>. However, AIs facilitate the generation of such proofs.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/ai-for-mathematics/</guid><pubDate>Sun, 29 Mar 2026 00:00:00 +0000</pubDate></item><item><title>Anki your life</title><link>https://isabeldahlgren.github.io/anki-your-life/</link><description>&lt;p>I&amp;rsquo;m rather distracted, forgetful and impulsive &amp;ndash; just like most humans. People use different gadgets to focus on what matters: bullet journals, to-do lists and pocket-sized notebooks. I&amp;rsquo;m currently exploring an alternative: Anki.&lt;/p>
&lt;p>Anki has many use cases beyond cramming for exams. We use our memory all the time, and whatever is memory-dependent is potentially ankifiable. I&amp;rsquo;ve been experimenting with ways to use Anki for procedural knowledge, and for the formation of mental habits specifically.&lt;/p>
&lt;h3 id="examples">Examples &lt;a href="#examples" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Here are some examples of how others have used Anki to &amp;lsquo;develop a certain mentality&amp;rsquo;. I know this is a pretty vague goal, but I hope the examples will clarify.&lt;/p>
&lt;p>To build a more accurate &lt;a href="https://www.lesswrong.com/w/world-modeling">world model&lt;/a>, you could e.g. use Anki to memorise geography facts. My friend &lt;a href="https://lifeofleif.substack.com">Erik&lt;/a> leads a group &lt;a href="https://docs.google.com/document/d/1ZX9ke2dqImAxr3rX-sRC5YOHbdPdbsss-K4zegjXmZ0/edit?tab=t.myyqqi9xz1ns#heading=h.inediu9gie1y">ankifying statistics from Our World in Data&lt;/a>. If you want to make Fermi estimates on the fly, you need some data stored in your head.&lt;/p>
&lt;p>Similarly, an aspiring superforecaster might memorise forecasting tips and tricks. In &lt;em>Superforecasting&lt;/em>, Philip Tetlock characterises the forecasting methods of superforecasters:&lt;/p>
&lt;blockquote>
&lt;p>In their methods of forecasting they [superforecasters] tend to be:&lt;/p>
&lt;ul>
&lt;li>Pragmatic: Not wedded to any idea or agenda&lt;/li>
&lt;li>Analytical: Capable of stepping back from the tip-of-your-nose perspective and considering other views&lt;/li>
&lt;li>Dragon-eyed: Value diverse views and synthesize them into their own&lt;/li>
&lt;li>Probabilistic: Judge using many grades of maybe&lt;/li>
&lt;li>Thoughtful updaters: When facts change, they change their minds&lt;/li>
&lt;li>Good intuitive psychologists: Aware of the value of checking thinking for cognitive and emotional biases&lt;/li>
&lt;/ul>
&lt;/blockquote>
&lt;p>One way of translating this to Anki cards is by having a &lt;a href="https://www.lesswrong.com/posts/7Q7DPSk4iGFJd8DRk/an-opinionated-guide-to-using-anki-correctly#How_to_keep_cards_short___Levels">top-level card&lt;/a> with front &amp;lsquo;Methods of superforecasters&amp;rsquo; and back &amp;lsquo;PAD ProT Go&amp;rsquo; (some acronym which is memorable to me), possibly adding cards decoding the acronym (e.g. front: &amp;lsquo;PAD ProT Go - first P&amp;rsquo;; back: &amp;lsquo;Pragmatic&amp;rsquo;).&lt;/p>
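To make the scheme concrete, here&amp;rsquo;s a toy sketch (the function name and card format are my own invention, not Anki&amp;rsquo;s API) that expands a labelled mnemonic into the top-level card plus its decoding cards:

```python
def acronym_cards(topic, mnemonic, items):
    """Build (front, back) pairs: one top-level card, then one decoder per item."""
    cards = [(f"Methods of {topic}", mnemonic)]            # top-level card
    for i, item in enumerate(items, start=1):
        cards.append((f"{mnemonic} - item {i}", item))     # decoding cards
    return cards

for front, back in acronym_cards(
    "superforecasters", "PAD ProT Go",
    ["Pragmatic", "Analytical", "Dragon-eyed",
     "Probabilistic", "Thoughtful updaters", "Good intuitive psychologists"],
):
    print(front, "->", back)
```

In practice you&amp;rsquo;d paste the resulting pairs into a deck by hand or via a CSV import.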
&lt;p>Apparently &lt;a href="https://ankiweb.net/shared/decks?search=bible">people (presumably Christians) have ankified parts of the Bible&lt;/a> too, as a way to build character.&lt;/p>
&lt;h3 id="why-would-this-work">Why would this work? &lt;a href="#why-would-this-work" class="hash">#&lt;/a>&lt;/h3>
&lt;p>I find Anki quite effective for building mental habits. Here&amp;rsquo;s my pop psych theory as to why.&lt;/p>
&lt;p>Anki review sessions serve as periodic reminders. For example, suppose I want to make more data-driven judgements. Reviewing my Anki deck then becomes an affirmation of faith: each time I open the app, it&amp;rsquo;s like saying: &amp;lsquo;I&amp;rsquo;m doing this so I can make decisions grounded in actual data&amp;rsquo;. I find spaced repetition practice pretty boring, and I guess this subconscious motivational talk helps reinforce the habit.&lt;/p>
&lt;p>And when it comes to adopting a certain mentality &amp;ndash; being data-driven, making accurate forecasts or, well, being a good Christian &amp;ndash; the main bottleneck might be forgetfulness. If only it occurred to you to compare against that baseline, to give probabilistic intervals rather than yes-no answers, or to think of the fifth commandment&amp;hellip;&lt;/p>
&lt;p>Maybe our schooling system also counts as evidence that Anki can be used to build mental habits. I assume the ultimate goal of education is to instill certain mental habits, and memorisation is a key part of the humanist education program, tried and tested since the 1500s.&lt;/p>
&lt;h3 id="conclusions">Conclusions &lt;a href="#conclusions" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Erik&amp;rsquo;s project is called &amp;lsquo;Anki The World&amp;rsquo; &amp;ndash; I suppose a natural summary of this post is &amp;lsquo;Anki Your Life&amp;rsquo;. Both students and non-students have a lot to gain from building a consistent spaced repetition habit.&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/anki-your-life/</guid><pubDate>Sun, 22 Mar 2026 00:00:00 +0000</pubDate></item><item><title>A semester paper on harmonic analysis</title><link>https://isabeldahlgren.github.io/a-semester-paper-on-harmonic-analysis/</link><description>&lt;p>Can you recover a function $f$ from its absolute value $|f|$? Recovering $f$ exactly is impossible: if $\lambda \neq 1$ is a complex scalar with modulus $1$, then $|f| = |\lambda f|$ although $f \neq \lambda f$. Thus, one rather asks whether $f$ can be recovered up to multiplication by a unimodular scalar. If this is possible for arbitrary $f$ in a subspace $E \subset L^2(\mathbb R)$, we say $E$ does &lt;em>phase retrieval&lt;/em>.&lt;/p>
&lt;p>Further, we can ask whether the recovery can be done in a controlled way &amp;ndash; stably. Given a subspace $E \subset L^2(\mathbb{R})$ and $f, g \in E$, is there a constant $C > 0$ such that&lt;/p>
$$\inf_{|\lambda| = 1} \|f - \lambda g\|_2 \le C \||f| - |g|\|_2?$$&lt;p>If so, we say $E$ does &lt;em>stable phase retrieval&lt;/em>.&lt;/p>
&lt;h3 id="a-cute-question">A cute question&amp;hellip; &lt;a href="#a-cute-question" class="hash">#&lt;/a>&lt;/h3>
&lt;p>The question of stable phase retrieval is easy to state but hard to answer, much like combinatorics questions. In fact, I wrote a 40-page semester paper about aspects of this question; stable phase retrieval is a highly active research field within harmonic analysis. This blog post is intended as a mathematical apéritif, so you know which bits to read.&lt;/p>
&lt;p>The semester paper is split into two parts: Part I surveys prior work; Part II is a modest attempt to extend prior work. We drop assumptions, simplify arguments and adapt proofs to other problems.&lt;/p>
&lt;p>My co-supervisor, Dr. Mitchell Taylor, who co-authored the 2022 paper on the first infinite-dimensional subspaces of complex-valued $L^2(\mathbb{R})$ doing stable phase retrieval, shared ideas for extending the work surveyed in Part I. These are the ideas developed in Part II.&lt;/p>
&lt;p>I would like to thank both my supervisors, Dr. Mitchell Taylor and Prof. Dr. Alessio Figalli for making this project possible. I also want to express my sincere gratitude to Dr. Taylor for valuable discussions &amp;ndash; both on the mathematical content and on the research process itself.&lt;/p>
&lt;h3 id="part-i">Part I &lt;a href="#part-i" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Part I kicks off with some preliminaries from Fourier series and Banach lattices. Lacunary Fourier series are Fourier series with sparsity conditions on the frequencies appearing in the Fourier expansion. Apparently you can say a lot about their $L^p([0, 1])$ norms &amp;ndash; for instance, they&amp;rsquo;re all equivalent. Another highlight of this subsection is Nazarov&amp;rsquo;s inequality, which concerns another kind of &amp;lsquo;sparse&amp;rsquo; Fourier series; its statement was nowhere to be found except in his doctoral thesis, written in Russian. We also develop some Banach lattice theory: Banach lattices are Banach spaces with a lattice structure, which makes them the &amp;lsquo;right&amp;rsquo; setting for studying stable phase retrieval.&lt;/p>
&lt;p>Then we outline the problem of stable phase retrieval, supplying many examples and non-examples of subspaces doing stable phase retrieval. My favourite example of a subspace failing stable phase retrieval is the linear span of characteristic functions on $[0, 1]$ and $[2, 3]$; in my notation, that&amp;rsquo;s $\langle 1_{[0, 1]}, 1_{[2, 3]}\rangle$. To see why this subspace fails stable phase retrieval, notice that $u := 1_{[0, 1]} + 1_{[2, 3]}$ and $v := 1_{[0, 1]} - 1_{[2, 3]}$ have the same modulus, though there&amp;rsquo;s no $\lambda$ on the unit circle such that $u = \lambda v$.&lt;/p>
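The failure can be checked numerically. Below is a small sketch of my own (an illustration, not code from the paper) that discretises the two functions and computes both sides of the stability inequality; for unimodular $\lambda$, the infimum of $\|u - \lambda v\|_2$ has the closed form $\sqrt{\|u\|^2 + \|v\|^2 - 2|\langle u, v\rangle|}$.

```python
import math

# Discretise [0, 3]; u = 1_[0,1] + 1_[2,3] and v = 1_[0,1] - 1_[2,3].
n = 3000
dx = 3 / n
xs = [(i + 0.5) * dx for i in range(n)]       # midpoints of the grid cells

def indicator(a, b):
    return [1.0 if a <= x <= b else 0.0 for x in xs]

u = [p + q for p, q in zip(indicator(0, 1), indicator(2, 3))]
v = [p - q for p, q in zip(indicator(0, 1), indicator(2, 3))]

def l2_norm(f):
    return math.sqrt(sum(t * t for t in f) * dx)

# Right-hand side: || |u| - |v| ||_2 is exactly zero...
rhs = l2_norm([abs(a) - abs(b) for a, b in zip(u, v)])

# ...while the left-hand side, the infimum over unimodular lambda of
# ||u - lambda v||_2, equals sqrt(||u||^2 + ||v||^2 - 2 |(u, v)|).
inner = sum(a * b for a, b in zip(u, v)) * dx
lhs = math.sqrt(l2_norm(u) ** 2 + l2_norm(v) ** 2 - 2 * abs(inner))

print(lhs, rhs)  # lhs is about 2, rhs is 0: no constant C can work
```

Since the inner product of $u$ and $v$ vanishes, the left-hand side is about $2$ while the right-hand side is $0$, so no constant $C$ satisfies the inequality.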
&lt;p>We then proceed to discuss some striking results about stable phase retrieval in Banach lattices, justifying the earlier slogan about Banach lattices being good for phase retrieval. Just to whet your appetite: subspaces of Banach lattices doing stable phase retrieval admit an easy characterisation, and for finite-dimensional subspaces of Banach lattices, one can show that phase retrieval is automatically stable.&lt;/p>
&lt;p>The final section of part one focuses on the first constructions of infinite-dimensional subspaces of $L^2(\mathbb{R})$ doing stable phase retrieval from 2022. We present two results, the first due to R. Calderbank, I. Daubechies, D. Freeman and N. Freeman (subspaces of real-valued $L^2(\mathbb{R})$); the second due to M. Christ, B. Pineau, T. Oikhberg and M. Taylor (subspaces of complex-valued $L^2(\mathbb{R})$). The first construction involves orthonormal iid random variables and characteristic functions, while the second involves bases satisfying four well-chosen conditions.&lt;/p>
&lt;h3 id="part-ii">Part II &lt;a href="#part-ii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Part two is a modest attempt to extend the work on infinite-dimensional subspaces of $L^2(\mathbb R)$ doing stable phase retrieval.&lt;/p>
&lt;p>First, we describe one way of weakening the conditions in the Christ-Pineau-Taylor result. One of the four conditions says that a set of functions should be orthogonal. However, because stable phase retrieval is an approximate condition (we want a &amp;lsquo;$\le C$&amp;rsquo;, not a &amp;lsquo;$=$&amp;rsquo;), and orthogonality is a condition of exact equality, we try relaxing the orthogonality to something like &amp;lsquo;almost orthogonality&amp;rsquo;. This leads one to consider so-called Riesz bases, which, intuitively, are &amp;lsquo;almost orthogonal&amp;rsquo; sequences. We argue that the same proof goes through for Riesz bases too.&lt;/p>
&lt;p>Second, we simplify an argument in the Christ-Pineau-Taylor paper, using a reduction suggested by a subsequent paper. Namely, one can show that orthogonal vectors constitute a kind of worst-case scenario for stable phase retrieval: if functions $f$ and $g$ break the stable phase retrieval inequality, then you can extract a pair of orthogonal vectors $f'$ and $g'$ violating the same inequality. So, by contraposition, it&amp;rsquo;s enough to show that arbitrary orthogonal vectors from your subspace $E$ satisfy the stable phase retrieval inequality to conclude that $E$ does stable phase retrieval as a whole.&lt;/p>
&lt;p>Third, we adapt the proof of Calderbank-Daubechies-Freeman-Freeman to the setting of Pauli-stable phase retrieval. A subspace $E \subset L^2([0, 1])$ does Pauli-stable phase retrieval if, for $f, g \in E \subset L^2([0, 1])$, we have&lt;/p>
$$\inf_{|\lambda| = 1} \|f - \lambda g\|_2 \le C (\||f| - |g|\|_2 + \||\hat f| - |\hat g|\|_{\ell^2}),$$&lt;p>where the Fourier transform of an $L^2([0, 1])$ function is taken to be its Fourier series (which is in $\ell^2$, says Parseval).&lt;/p>
&lt;p>In the report, we build subspaces from sine functions where the Fourier transforms play the same role as the characteristic functions in the original paper. Adapting their proof, then, leads to a result about infinite-dimensional subspaces of real-valued $L^2([0, 1])$ doing Pauli-stable phase retrieval.&lt;/p>
&lt;p>The main difficulty in this proof was finding a workaround for independence &amp;ndash; the original construction involves iid random variables, but our building blocks are sine functions. To have the functions in our subspace satisfy the same kinds of bounds, one can place sparsity conditions on the Fourier frequencies. And yes &amp;ndash; this is where Nazarov&amp;rsquo;s inequality comes in.&lt;/p>
&lt;p>This should be enough for an apéritif. I invite you to have a closer look at the &lt;a href="https://drive.google.com/file/d/1XR_h0FKFxKp8B9Q80-jBvp2iax3D4Pl0/view?usp=sharing">report&lt;/a> &amp;ndash; the actual buffet. I had plenty of fun working on this project, and I hope this comes through in the full report.&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/a-semester-paper-on-harmonic-analysis/</guid><pubDate>Sun, 15 Mar 2026 00:00:00 +0000</pubDate></item><item><title>Thinking outside the timeline</title><link>https://isabeldahlgren.github.io/thinking-outside-the-timeline/</link><description>&lt;p>How should we conceive of time? The question admits no easy answer, so different cultures have developed different time conceptions. The nature of time is something of a &lt;a href="https://iep.utm.edu/time/">philosophical rabbit hole&lt;/a> &amp;ndash; I&amp;rsquo;ll try staying away from scholarly debates on time, instead focusing on the way ordinary people &amp;lsquo;feel&amp;rsquo; time. What picture does the word &amp;rsquo;time&amp;rsquo; elicit?&lt;/p>
&lt;p>In the West today, most people conceive of time as linear: time moves in a straight line, just like an arrow through space. Hence we represent time as lines &amp;ndash; timelines.&lt;/p>
&lt;p>In discourse on AI, believers in the transformative impact of AI (myself included) seem to assume a special conception of time. I think AI will lead to rapid technological progress and economic growth within the next few decades, and this could &amp;ndash; though needn&amp;rsquo;t &amp;ndash; lead to accelerated social progress as well.&lt;/p>
&lt;p>Will MacAskill and Fin Moorhouse introduced the &lt;a href="https://www.forethought.org/research/preparing-for-the-intelligence-explosion">thought experiment of the accelerated decade&lt;/a> to make the potential consequences of AI more visceral: imagine all scientific, technological, social and political progress from 1925-2025 occurring in the period 1925-1935 &amp;ndash; AI might catalyse the same kind of dizzying progress in the next decade. Implicit in their thought experiment is an event-driven conception of time: something happens around 2020-2040 (an intelligence explosion), which compresses the rest of the timeline.&lt;/p>
&lt;p>My conception of time has influenced a series of big life decisions, so I wanted to inventory common time conceptions out there. As we shall see, our linear conception of time is a fairly recent phenomenon, and alternatives exist.&lt;/p>
&lt;h3 id="some-time-conceptions">Some time conceptions &lt;a href="#some-time-conceptions" class="hash">#&lt;/a>&lt;/h3>
&lt;p>For most of Western history, the notion of circular time dominated. In a circular time conception, our lives &amp;ndash; and history more broadly &amp;ndash; are viewed as a succession of repeated events, e.g. the four seasons or periods of war and peace. Plato introduced a circular time conception in &lt;em>Timaeus&lt;/em>, linking time to planetary orbits&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. The idea of reincarnation, implicit in Eastern thought traditions, also suggests a circular view of time.&lt;/p>
&lt;p>By the late 19th century, a linear time conception took hold. Emily Thomas gives &lt;a href="https://aeon.co/essays/when-we-turned-time-into-a-line-we-reimagined-past-and-future">four explanations&lt;/a> for this shift in perspective.&lt;/p>
&lt;p>These are all fun facts, so I&amp;rsquo;ll quickly rehash her views. First, there was the creation of time tables and timelines &amp;ndash; if you want to visualise historical events, you can&amp;rsquo;t use circles &amp;ndash; you have to use tables or lines. Second, there was Darwin&amp;rsquo;s theory of evolution (Darwin is everywhere). Third, she points to the development of chronophotography, a photographic technique which captures movements as a series of snapshots. Finally, physics: Charles Hinton developed a physical theory of time as a fourth dimension. From this perspective, then, time really was a line.&lt;/p>
&lt;p>But &lt;a href="https://bigthink.com/thinking/a-brief-history-of-linear-time/">other explanations are possible&lt;/a>. You could also highlight the influence of Christianity. In the Bible, there&amp;rsquo;s a beginning and an end: Genesis and Judgement Day. Time moves from beginning to end &amp;ndash; time is linear.&lt;/p>
&lt;p>As a special case of linear time, we can also speak of &amp;lsquo;binary time&amp;rsquo;&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>, where you chop time into two distinct epochs. For example, some environmentalists distinguish between the pre-Anthropocene era and the Anthropocene. Likewise, if you believe AGI could be huge, you might think in terms of pre- and post-AGI eras. Or, for a communist, the revolution marks a turning point in history.&lt;/p>
&lt;p>Alternatives to linear and circular time exist. For example, Parmenides and Zeno viewed time as an illusion. Kant &lt;a href="https://plato.stanford.edu/entries/kant-spacetime/">held a similar view&lt;/a>, it seems. Or you could just be agnostic about your view of time. From &lt;em>Four Quartets&lt;/em>:&lt;/p>
&lt;blockquote>
&lt;p>Time present and time past
Are both perhaps present in time future,
And time future contained in time past.&lt;/p>
&lt;/blockquote>
&lt;p>Maybe one should just admit defeat?&lt;/p>
&lt;h3 id="conclusions">Conclusions &lt;a href="#conclusions" class="hash">#&lt;/a>&lt;/h3>
&lt;p>So much for our overview of time conceptions. One conclusion emerges from these examples: your time conception is closely connected to your worldview. In a secularised society, a proxy question for &amp;lsquo;Do you believe in some God?&amp;rsquo; might be &amp;lsquo;How do you view time?&amp;rsquo; Sometimes you want to think outside the line.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>This said, there was a lot of confusion regarding time in Ancient Greece; Parmenides and Zeno, for example, viewed time as an illusion.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>There&amp;rsquo;s surely a better, more scholarly term for this.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/thinking-outside-the-timeline/</guid><pubDate>Sun, 08 Mar 2026 00:00:00 +0000</pubDate></item><item><title>Is AI safety the new climate change?</title><link>https://isabeldahlgren.github.io/is-ai-safety-the-new-climate-change/</link><description>&lt;p>Ever since Eliezer Yudkowsky and Nick Bostrom first drew our attention to the risks of superintelligent AI, &lt;a href="https://www.alignmentforum.org/posts/v6Q7T335KCMxujhZu/clarifying-what-failure-looks-like#Historical_precedents">people&lt;/a> have likened AI safety to climate change. Here, I&amp;rsquo;m referring to the overall goals of developing safe AI and transitioning to a net zero economy &amp;ndash; both the technical and social dimensions.&lt;/p>
&lt;p>Since the publication of &lt;em>Superintelligence&lt;/em> (2014), AI safety has matured as a field. It&amp;rsquo;s no longer a fringe cause area of effective altruism or a topic confined to rationalist forums online. For better or worse, AI safety is increasingly becoming like climate change in a number of ways.&lt;/p>
&lt;p>Perhaps most strikingly, AI safety and climate change debates centre on a point of no return. AI safety researchers speak of a technological singularity, the point where AIs start to recursively self-improve and AI progress takes off; climate scientists speak of tipping points, temperature thresholds where additional warming triggers irreversible changes.&lt;/p>
&lt;p>Similarly, within both AI and climate change, there are so-called &amp;lsquo;doomers&amp;rsquo;, i.e. people who think we&amp;rsquo;re bound for some kind of apocalypse. In AI, the archetypal doomer is Eliezer Yudkowsky, who seems to think &lt;a href="https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities">AGI will inevitably kill everyone&lt;/a> (or see the title of his book). Likewise, many AI safety researchers think catastrophic outcomes are highly likely in the absence of regulations. For instance, here&amp;rsquo;s &lt;a href="https://lironshapira.substack.com/p/max-tegmark-vs-dean-ball-debate-ban-superintelligence">Max Tegmark&lt;/a> in an interview from November 2025&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>:&lt;/p>
&lt;blockquote>
&lt;p>&amp;hellip;if we go ahead and continue having nothing like the FDA for AI, so people can legally just launch superintelligence and worry about getting sued later&amp;hellip; yeah, I would think it’s definitely over 90% that we lose control.&lt;/p>
&lt;/blockquote>
&lt;p>Climate change activists like Greta Thunberg often employ similar doom-laden language. Someone has to sound the alarm.&lt;/p>
&lt;p>And in both domains, the public has heard the alarm, as it were. By now, most people agree that we should build safe AI and solve global warming&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. As of September 2025, &lt;a href="https://www.pewresearch.org/science/2025/09/17/views-of-ais-impact-on-society-and-human-abilities/">50%&lt;/a> of American adults were more concerned than excited about the use of AI in everyday life; in 2021, the same figure was 37%&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>.&lt;/p>
&lt;p>Finally, I suspect that both AI safety and climate change are primarily policy problems. By this, I mean that the bottleneck is political willpower rather than a lack of technical solutions. &lt;a href="https://80000hours.org/podcast/episodes/buck-shlegeris-ai-control-scheming/">Says&lt;/a> Buck Shlegeris of Redwood Research:&lt;/p>
&lt;blockquote>
&lt;p>Five years ago I thought of misalignment risk from AIs as a really hard problem that you’d need some really galaxy-brained fundamental insights to resolve. Whereas now, to me the situation feels a lot more like we just really know a list of 40 things where, if you did them — none of which seem that hard — you’d probably be able to not have very much of your problem.&lt;/p>
&lt;/blockquote>
&lt;p>Moreover, Anthropic&amp;rsquo;s conflict with the Pentagon highlights the dependence on policy-makers: AI safety is becoming a political issue. In climate change, it seems we&amp;rsquo;re facing the same situation, with &lt;a href="https://www.theguardian.com/commentisfree/2024/oct/16/climate-crisis-technology-ai">fixes available&lt;/a> but a policy mess.&lt;/p>
&lt;p>In conclusion, AI safety is becoming more mainstream &amp;ndash; mainstream to the point of becoming more like a global coordination problem. And while there are a few differences&lt;sup id="fnref:4">&lt;a href="#fn:4" class="footnote-ref" role="doc-noteref">4&lt;/a>&lt;/sup>, the discourses on AI safety and climate change bear an uncanny resemblance to each other: the doomers speak of a doomsday, and the threat comes from big corps prioritising commercial interests over social considerations.&lt;/p>
&lt;p>AI safety has its roots in the no-nonsense culture on LessWrong, and I&amp;rsquo;m hoping we can use this to handle AI safety better than climate change&lt;sup id="fnref:5">&lt;a href="#fn:5" class="footnote-ref" role="doc-noteref">5&lt;/a>&lt;/sup>.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>Most people in the AI safety community hold more moderate views; see e.g. &lt;a href="https://www.theguardian.com/technology/ng-interactive/2025/dec/30/the-office-block-where-ai-doomers-gather-to-predict-the-apocalypse">this article&lt;/a>. Jonas Vollmer puts the probability of AI killing us and creating a world run by AI systems at 20%, while Buck Shlegeris thinks an AI takeover is 40% likely.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>Interestingly, the recognition of the AI safety problem has led to an analogue of green-washing.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>For an overview of public opinion on AI, see &lt;a href="https://80000hours.org/podcast/episodes/eileen-yam-experts-public-artificial-intelligence-survey/#highlights">this 80k podcast episode&lt;/a>.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:4">
&lt;p>See Sam Clarke&amp;rsquo;s discussion &lt;a href="https://www.alignmentforum.org/posts/v6Q7T335KCMxujhZu/clarifying-what-failure-looks-like#How_WFLL1_may_differ_from_precedents">here&lt;/a>. To me, the most striking difference is that the effects of transformative AI appear less well understood. What will happen to the labour market?&amp;#160;&lt;a href="#fnref:4" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:5">
&lt;p>No, we&amp;rsquo;re &lt;a href="https://www.weforum.org/stories/2025/11/not-on-track-for-net-zero-2050-how-we-get-there/">not on track for our climate change goals&lt;/a>.&amp;#160;&lt;a href="#fnref:5" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/is-ai-safety-the-new-climate-change/</guid><pubDate>Sun, 01 Mar 2026 00:00:00 +0000</pubDate></item><item><title>Should we all be doing evals?</title><link>https://isabeldahlgren.github.io/should-we-all-be-doing-evals/</link><description>&lt;p>I try to limit my AI news intake, but I couldn&amp;rsquo;t help following some of the news around OpenClaw and Moltbook. It began with the &lt;a href="https://x.com/karpathy/status/2024987174077432126">following tweet from Andrej Karpathy&lt;/a> trickling down via a Slack channel:&lt;/p>
&lt;blockquote>
&lt;p>Bought a new Mac mini to properly tinker with claws over the weekend. The apple store person told me they are selling like hotcakes and everyone is confused :)&lt;/p>
&lt;/blockquote>
&lt;p>And so went my weekend too.&lt;/p>
&lt;p>During that weekend, as I saw more of OpenClaw for myself, I began reflecting more on AI progress. In brief, I&amp;rsquo;m starting to internalise the idea that we&amp;rsquo;re in the midst of an intelligence explosion, and I think this means AI safety work should centre on LLM evals.&lt;/p>
&lt;h3 id="feeling-the-agi">Feeling the AGI &lt;a href="#feeling-the-agi" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Developers of ML models, whether in academia or industry, need something to aspire to &amp;ndash; a holy grail of ML models. Right now, the holy grail is often called &amp;lsquo;artificial general intelligence&amp;rsquo; (AGI).&lt;/p>
&lt;p>Here&amp;rsquo;s the original definition of AGI from an eponymous research monograph from 2007&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>:&lt;/p>
&lt;blockquote>
&lt;p>What is meant by AGI is, loosely speaking, AI systems that possess a reasonable degree of self-understanding and autonomous self-control, and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didn&amp;rsquo;t know about at the time of their creation.&lt;/p>
&lt;/blockquote>
&lt;p>The definition is vague, but you could make the case that Claude Opus 4.6 satisfies these conditions.&lt;/p>
&lt;p>The paper &lt;a href="https://www.agidefinition.ai">A Definition of AGI&lt;/a> from October 2025 suggests defining AGI as &amp;lsquo;an AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult&amp;rsquo; and investigates the extent to which OpenAI models are AGI. We&amp;rsquo;ll stick with their definition for the rest of the post.&lt;/p>
&lt;p>The authors assigned GPT-4 and GPT-5 AGI scores of 27% and 57%, respectively. Linearly interpolating AGI score progress based on release dates brings GPT-5.2, the latest OpenAI model, to around 60%. Regrettably, they don&amp;rsquo;t benchmark Gemini or Claude models, which I assume would perform even better.&lt;/p>
&lt;p>Other benchmarks which test capabilities related to AGI, such as &lt;a href="https://arcprize.org/leaderboard">Arc-AGI-1&lt;/a> and &lt;a href="https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/">METR&amp;rsquo;s task horizon length&lt;/a>, reveal rapid progress too. While &lt;a href="https://epoch.ai/blog/a-rosetta-stone-for-ai-benchmarks">unsaturated benchmarks still exist&lt;/a>, most of them test beyond-AGI capabilities.&lt;/p>
&lt;p>While I agree the expression &amp;lsquo;beyond-AGI capabilities&amp;rsquo; sounds strange, remember that AGI isn&amp;rsquo;t the same as the holy grail of intelligence &amp;ndash; for this article, we agreed that AGI was AI with the cognitive capabilities of a well-educated adult.&lt;/p>
&lt;p>Depending on your personal disposition, then, you might feel the AGI. At least some of it.&lt;/p>
&lt;h3 id="feeling-the-recursive-self-improvement">Feeling the recursive self-improvement &lt;a href="#feeling-the-recursive-self-improvement" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Just as we can all feel some AGI, we can also feel some recursive self-improvement&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. Recursive self-improvement can happen at different levels: from training a deep network via backpropagation to AI systems running entire R&amp;amp;D cycles autonomously.&lt;/p>
&lt;p>I view OpenClaw as a milestone in the development of recursively self-improving AI systems. While OpenClaw isn&amp;rsquo;t exactly R&amp;amp;D, it reveals the power of agent scaffolding. Weaker models can form the backbone of &lt;a href="https://www.forbes.com/sites/digital-assets/2026/01/31/what-is-openclaw-and-why-it-matters-for-cryptos-next-phase/">AI systems earning money&lt;/a>, for example. And the more money you have, the more you can invest, and the more you earn. Maybe such money bots could be viewed as recursively self-improving AI systems.&lt;/p>
&lt;p>As further evidence for self-improvement, Thibault Sottiaux from OpenAI &lt;a href="https://x.com/thsottiaux/status/2018258151603388639">reports&lt;/a> that:&lt;/p>
&lt;blockquote>
&lt;p>Codex now pretty much builds itself, with the help and supervision of a great team. The bottleneck has shifted to being how fast we can help and supervise the outcome.&lt;/p>
&lt;/blockquote>
&lt;p>Or take Dario&amp;rsquo;s forecast from March 2025 that &lt;a href="https://www.lesswrong.com/posts/prSnGGAgfWtZexYLp/is-90-of-code-at-anthropic-being-written-by-ais">90% of code at Anthropic would be written by AI within 3-6 months&lt;/a>.&lt;/p>
&lt;h3 id="but-also-feeling-history">But also feeling history &lt;a href="#but-also-feeling-history" class="hash">#&lt;/a>&lt;/h3>
&lt;p>The development of AI &amp;ndash; just as any technology, really &amp;ndash; follows a cyclical pattern&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>: booms, winters, booms. AI was established as a field of study in 1956 with the Dartmouth conference; the mid-1970s saw an AI winter, followed by a productive 1980s, with seminal work by Geoffrey Hinton and Judea Pearl. Hinton&amp;rsquo;s paper on backpropagation for neural networks was published in 1986, for example&lt;sup id="fnref:4">&lt;a href="#fn:4" class="footnote-ref" role="doc-noteref">4&lt;/a>&lt;/sup>. From the early 1990s until 2011, there was another winter. After that came AlexNet (Ilya Sutskever&amp;rsquo;s first major research achievement), AlphaGo, attention &amp;ndash; most breakthroughs enabling modern LLMs, really. The last ten years correspond to the biggest AI boom to date, with the last four years &amp;ndash; since the release of ChatGPT in November 2022 &amp;ndash; arguably seeing a progress speedup.&lt;/p>
&lt;p>If you think there&amp;rsquo;ll be an intelligence explosion or a technological singularity, it&amp;rsquo;s worth placing this decade in a broader historical context &amp;ndash; look at a time scale with ticks for centuries rather than months. Then we appear to be in the midst of an intelligence explosion right now. We have some AGI &amp;ndash; a good 57%, actually, if you follow the framework of the &lt;a href="https://www.agidefinition.ai">definition of AGI paper&lt;/a> &amp;ndash; and AI systems highly capable of recursive self-improvement. However, the best piece of evidence that we&amp;rsquo;re in an intelligence explosion now might well be the &lt;a href="https://metr.org/time-horizons/">exponential law for task horizon length&lt;/a>.&lt;/p>
&lt;p>Crucially, we can start asking questions about what happened during the beginning of this intelligence explosion. Did the world become better, on balance?&lt;/p>
&lt;p>Providing a well-grounded answer to this question would require a book, or at least a series of blog posts, and it&amp;rsquo;s beyond the scope of this article. But we can draw important conclusions.&lt;/p>
&lt;h3 id="conclusions">Conclusions &lt;a href="#conclusions" class="hash">#&lt;/a>&lt;/h3>
&lt;p>We seem to be in the middle of an intelligence explosion, plausibly the final prophesied Intelligence Explosion or Singularity. We already have some degree of AGI and recursive self-improvement (if this isn&amp;rsquo;t an intelligence explosion, then what is?). This means we can, and should, ask more questions about the state of the world, so we can update our threat models. To what extent is GPT-5.2 scheming? How much fine-tuning does it take to remove safety guardrails in open-weight models? Can existing frontier models perform economically valuable tasks?&lt;/p>
&lt;p>The lack of AI regulation has led to a zoo of models to study. Over the last few years, AI safety has become less like natural philosophy and more like biology, involving lab work that can meaningfully inform the theory.&lt;/p>
&lt;p>I very much welcome this development. Personally, I like benchmarking papers: whatever the benchmark results, that&amp;rsquo;s valuable information&lt;sup id="fnref:5">&lt;a href="#fn:5" class="footnote-ref" role="doc-noteref">5&lt;/a>&lt;/sup>. Theoretical alignment work &amp;ndash; or even just attempts at developing new methods within empirical AI safety &amp;ndash; seem riskier. Perhaps the best way to improve humanity&amp;rsquo;s long-term future comes down to doing API calls and doing honest write-ups of the results.&lt;/p>
&lt;p>Anyway, if we want to coexist in harmony with &lt;a href="https://joecarlsmith.com/2024/01/02/gentleness-and-the-artificial-other">the second species&lt;/a>, let&amp;rsquo;s begin by understanding its evolutionary precursors. They&amp;rsquo;re already out there.&lt;/p>
&lt;p>&lt;em>Thanks to Santeri Koivula for feedback on this post.&lt;/em>&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>Reading the introduction felt much like reading a text from the 1300s &amp;ndash; the book, &lt;a href="https://link.springer.com/book/10.1007/978-3-540-68677-4">freely available via Springer&lt;/a>, is something of a historical document.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>I&amp;rsquo;m speaking of recursive self-improvement rather than &amp;lsquo;singularity&amp;rsquo; because the term &amp;lsquo;singularity&amp;rsquo; usually refers to an uncontrollable technological development.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>Ctrl-f &amp;lsquo;AI winter&amp;rsquo; in &lt;a href="https://www.ibm.com/think/topics/history-of-artificial-intelligence">this article&lt;/a>, or just read &lt;em>Genius Makers&lt;/em>.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:4">
&lt;p>Backpropagation was originally introduced in 1969 by Arthur Bryson and Yu-Chi Ho, but in a more general context.&amp;#160;&lt;a href="#fnref:4" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:5">
&lt;p>Thanks to my mentor &lt;a href="https://zhijing-jin.com/home/">Zhijing&lt;/a> for pointing this out.&amp;#160;&lt;a href="#fnref:5" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/should-we-all-be-doing-evals/</guid><pubDate>Sun, 22 Feb 2026 00:00:00 +0000</pubDate></item><item><title>ML algorithms for life</title><link>https://isabeldahlgren.github.io/ml-algorithms-for-life/</link><description>&lt;p>When planning, a good heuristic is to start visualising the desired outcome and recursively work out necessary steps: &amp;lsquo;I want to arrive at 8, so I need to leave at 7:30, so I need to finish breakfast at 7:10 &amp;ndash; shoot, I need to get up at 6:30&amp;rsquo;. Here are further examples: &amp;lsquo;I want to meditate consistently, so I need to find an accountability mechanism, so&amp;hellip;&amp;rsquo; or &amp;lsquo;I want to reduce X-risk, so I need to find a job related to X-risk reduction, so&amp;hellip;&amp;rsquo;&lt;/p>
&lt;p>The basic algorithm is &amp;lsquo;I want X; to do X, I need to do Y; to do Y, I need to do Z&amp;hellip;&amp;rsquo; Backpropagation, no?&lt;/p>
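&lt;p>The recursion above can be sketched in a few lines of Python &amp;ndash; a toy illustration only, with a made-up prerequisite map:&lt;/p>

```python
# Toy backward planner: walk from the goal back through its prerequisites,
# then reverse to get the order in which to actually do things.
prerequisites = {
    'arrive at 8': 'leave at 7:30',
    'leave at 7:30': 'finish breakfast at 7:10',
    'finish breakfast at 7:10': 'get up at 6:30',
}

def plan(goal):
    steps = [goal]
    # 'I want X; to do X, I need to do Y; to do Y, I need to do Z...'
    while steps[-1] in prerequisites:
        steps.append(prerequisites[steps[-1]])
    return list(reversed(steps))  # execute the earliest step first

print(plan('arrive at 8'))
# ['get up at 6:30', 'finish breakfast at 7:10', 'leave at 7:30', 'arrive at 8']
```

&lt;p>The analogy with backpropagation is loose, of course: here we only chain prerequisites backwards, without any gradients.&lt;/p>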
&lt;p>Computer science lingo is helpful for decision-making in real life, as highlighted in &lt;em>Algorithms to Live By&lt;/em>. In this post, let&amp;rsquo;s extend the book&amp;rsquo;s &amp;lsquo;computer science $\to$ life situations&amp;rsquo; mapping to the domain of ML.&lt;/p>
&lt;h3 id="kinds-of-learning">Kinds of learning &lt;a href="#kinds-of-learning" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Supervised, unsupervised and reinforcement learning have close analogues in real life. Supervised learning is like acing tests, where the test result is your labelled data; unsupervised learning is like understanding deeper, non-examinable patterns. Reinforcement learning, where an autonomous agent learns optimal decisions through trial and error, is a good metaphor for early adulthood.&lt;/p>
&lt;p>Similarly, it&amp;rsquo;s useful to bear in mind the distinction between test and validation sets. Ultimately, you want to minimise something like a generalisation error: the expected loss over data points drawn from a given distribution. This is different from your estimate of the generalisation error, the loss computed from the data points you happened to sample.&lt;/p>
$$\mathbb E_{(X, Y) \sim \mathcal P}[\ell(f(X), Y)] \neq \frac{1}{N} \sum_{i=1}^N \ell(f(x_i), y_i).$$&lt;p> Don&amp;rsquo;t be too disheartened if the right-hand side is high.&lt;/p>
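&lt;p>To make the gap between the two sides concrete, here is a minimal Python sketch with a made-up data-generating process. For this predictor, the true generalisation error under squared loss is exactly the noise variance, while the empirical estimate fluctuates around it:&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data-generating process: Y = 2X + standard Gaussian noise.
def sample(n):
    x = rng.normal(size=n)
    y = 2 * x + rng.normal(size=n)
    return x, y

def f(x):
    return 2 * x  # a fixed predictor that matches the underlying trend

# True generalisation error under squared loss: E[(f(X) - Y)^2] = Var(noise) = 1.
true_risk = 1.0

# Empirical estimate from a small sample: fluctuates around the true value.
x, y = sample(20)
empirical_risk = np.mean((f(x) - y) ** 2)
print(true_risk, empirical_risk)
```

&lt;p>Rerunning with a different seed gives a different empirical risk; only in the limit of infinite data do the two sides agree.&lt;/p>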
&lt;h3 id="optimisation-algorithms">Optimisation algorithms &lt;a href="#optimisation-algorithms" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Stochastic gradient descent, too, is a good metaphor for how we learn. At a given moment in time, you figure out the best next step, take a step in that direction, and iterate. The gradient is computed across minibatches &amp;ndash; you can&amp;rsquo;t feasibly use all the data &amp;ndash; so technically speaking, you&amp;rsquo;re probably doing minibatch SGD. The learning rate gives you the magnitude of the step. Terminology for describing loss landscapes (convexity, smoothness, local/global minima) is also useful, though it&amp;rsquo;s beyond the scope of this post.&lt;/p>
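&lt;p>For concreteness, here is a minimal minibatch-SGD sketch &amp;ndash; the synthetic data and hyperparameters are made up, and the model is a bare one-parameter linear fit:&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = 3x + small noise. SGD should recover the slope w = 3.
x = rng.normal(size=1000)
y = 3 * x + 0.1 * rng.normal(size=1000)

w, lr, batch = 0.0, 0.1, 32  # initial guess, learning rate, minibatch size
for step in range(200):
    idx = rng.integers(0, len(x), size=batch)           # sample a minibatch
    grad = 2 * np.mean((w * x[idx] - y[idx]) * x[idx])  # d/dw of squared loss
    w -= lr * grad                                      # step against the gradient

print(w)  # ends up close to 3
```

&lt;p>The minibatch makes each gradient noisy but cheap &amp;ndash; which is exactly the trade-off the metaphor is about.&lt;/p>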
&lt;p>As mentioned earlier, backpropagation also puts learning into perspective. Starting from the end goal, you work out necessary modifications to your parameters.&lt;/p>
&lt;h3 id="further-ml-concepts">Further ML concepts &lt;a href="#further-ml-concepts" class="hash">#&lt;/a>&lt;/h3>
&lt;p>So far, we&amp;rsquo;ve only discussed terminology relating to the basic ML pipeline. But there&amp;rsquo;s much more ML terminology allowing us to formulate problems from everyday life more precisely.&lt;/p>
&lt;p>The bias-variance tradeoff captures the fundamental tension between bias and variance: a complex model is more expressive (lower bias), but produces more variable outputs (higher variance). Closely related is the risk of overfitting. The conventional-wisdom version: Occam&amp;rsquo;s razor.&lt;/p>
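&lt;p>A quick way to see the overfitting half of this in code &amp;ndash; toy data, with the polynomial degrees picked arbitrarily:&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy samples from a simple underlying trend: y = x + noise.
x = np.linspace(-1, 1, 15)
y = x + 0.3 * rng.normal(size=x.size)

# A simple model (degree 1) vs a very flexible one (degree 12).
simple = np.polyfit(x, y, 1)
flexible = np.polyfit(x, y, 12)

# Evaluate both against the true trend on a fine grid.
x_test = np.linspace(-1, 1, 100)
err_simple = np.mean((np.polyval(simple, x_test) - x_test) ** 2)
err_flexible = np.mean((np.polyval(flexible, x_test) - x_test) ** 2)
print(err_simple, err_flexible)  # the flexible model fits the noise, not the trend
```

&lt;p>The flexible fit hugs the 15 noisy points and oscillates in between them; the simple fit generalises better.&lt;/p>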
&lt;p>We can also borrow jargon from classification. Just as a model can be well-calibrated &amp;ndash; combining the right signals in the right way &amp;ndash; so can humans. If wisdom is about making the right decisions in light of available data, then the wise man is he who has high AUROCs across different classification problems. Another useful notion from ML classification is the max-margin solution: choosing it is like applying the precautionary principle.&lt;/p>
&lt;p>Finally, principal component analysis also lends itself to personal real-world applications. For example, a good prompt might be: &amp;lsquo;What are the principal components here?&amp;rsquo; And in this setting, the term &amp;lsquo;singular value&amp;rsquo; should perhaps be understood literally.&lt;/p>
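&lt;p>As a minimal sketch on synthetic data, PCA boils down to centring the data and reading off the singular vectors:&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(3)

# 200 points scattered mostly along the direction (1, 1).
t = rng.normal(size=200)
data = np.column_stack([t, t]) + 0.1 * rng.normal(size=(200, 2))

# PCA: centre the data, then take the SVD.
centred = data - data.mean(axis=0)
_, singular_values, components = np.linalg.svd(centred, full_matrices=False)

print(components[0])    # first principal component: roughly (0.71, 0.71), up to sign
print(singular_values)  # one dominant singular value
```

&lt;p>The dominant singular value corresponds to the one direction that explains most of the spread &amp;ndash; the &amp;lsquo;principal component&amp;rsquo; of the situation.&lt;/p>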
&lt;h3 id="beyond-human-brain-analogues">Beyond human-brain analogues &lt;a href="#beyond-human-brain-analogues" class="hash">#&lt;/a>&lt;/h3>
&lt;p>The field of ML has always used real-world analogues to drive progress &amp;ndash; indeed, neural networks are modelled on human brains, and physical intuitions help with algorithm design. Conversely, thinking in ML terms might make us more clear-sighted. Maybe ordinary people can benefit from the field of ML in unexpected ways.&lt;/p>
&lt;p>&lt;em>This post was inspired by a PEAKS workshop I attended this fall, led by &lt;a href="https://mariuswenk.github.io">Marius Wenk&lt;/a>.&lt;/em>&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/ml-algorithms-for-life/</guid><pubDate>Sun, 15 Feb 2026 00:00:00 +0000</pubDate></item><item><title>Growing older faster</title><link>https://isabeldahlgren.github.io/growing-older-faster/</link><description>&lt;p>Every so often, I try remembering what Isabel was like one year ago. The more I cringe at younger me, the better &amp;ndash; it means that I changed, though I&amp;rsquo;m often unable to explain how. Perhaps the cringe-meter captures something which traditional end-of-year reflection prompts don&amp;rsquo;t: my cringe-meter appears to measure the extent to which I&amp;rsquo;ve internalised new ideas, whether truisms about life or theoretical frameworks. Naturally, I&amp;rsquo;d like to speed up this learning process &amp;ndash; in a sense, I&amp;rsquo;d like to grow older faster.&lt;/p>
&lt;p>In my experience, this is largely a matter of getting exposed to new ideas. I&amp;rsquo;ll list a few classic ways to get exposed to new ideas and then offer some comments.&lt;/p>
&lt;h3 id="examples">Examples &lt;a href="#examples" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Traveling might well be the definitive way to open one&amp;rsquo;s mind &amp;ndash; young people have always gone abroad to do soul searching. In the 17th century, European aristocrats often undertook &amp;lsquo;grand tours&amp;rsquo; around the age of 21; today, students are highly encouraged to do exchange programs.&lt;/p>
&lt;p>Similarly, starting a new role should expose you to new ideas. If you spend 40h per week at work, I bet something happens to you &amp;ndash; whether you notice or not. Maybe it&amp;rsquo;s a change in attitude, values or language usage. For example, observe how quickly friends doing internships begin using business acronyms in everyday speech.&lt;/p>
&lt;p>Studies can have the same effect. As a student, you spend much more than 40h per week at &amp;lsquo;work&amp;rsquo;, and you&amp;rsquo;re surrounded by like-minded people. The university experience often entails complete immersion in some geek culture.&lt;/p>
&lt;p>Of course, the content you learn can also have a mind-opening effect. For example, doing pure mathematics makes you appreciate precision (and see the lack of it everywhere). However, I assume subjects like history, literature or economics &amp;ndash; which concern our society more directly &amp;ndash; have a bigger potential to change one&amp;rsquo;s outlook on everyday life.&lt;/p>
&lt;p>Another class of &amp;lsquo;learning experiences&amp;rsquo; comes from particularly intense experiences: getting married, watching your favourite movie for the first time or having a good madeleine. Traumas, intensely negative experiences, also fall into this class.&lt;/p>
&lt;h3 id="comments">Comments &lt;a href="#comments" class="hash">#&lt;/a>&lt;/h3>
&lt;p>What can be said about these examples?&lt;/p>
&lt;p>The extent to which I internalise new ideas appears to be a function of both intensity and duration &amp;ndash; it&amp;rsquo;s the integral that matters, as it were. Not all stays abroad are formative experiences: I need to be away for sufficiently long, and the environment needs to be sufficiently different. Likewise, for work and studies, I need an element of novelty and time to parse the new information.&lt;/p>
&lt;p>But a major life change isn&amp;rsquo;t enough &amp;ndash; there&amp;rsquo;s also the need for introspection. This might well be why traveling is so effective, at least for me: being abroad puts me in a reflective holiday state of mind.&lt;/p>
&lt;p>Finally, observe that the common theme for the first three examples &amp;ndash; traveling, work and studies &amp;ndash; is novel encounters. My most meaningful learnings have come from conversations with people I trust deeply (whether dead or alive). Perhaps growing older faster is largely a matter of making new friends?&lt;/p>
&lt;p>&lt;em>This blog post was inspired by a conversation with Elias Koschier about &amp;lsquo;going OOD&amp;rsquo;.&lt;/em>&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/growing-older-faster/</guid><pubDate>Sun, 08 Feb 2026 00:00:00 +0000</pubDate></item><item><title>The power of network effects</title><link>https://isabeldahlgren.github.io/the-power-of-network-effects/</link><description>&lt;p>Returning to Sweden, I was struck by the differences in digital infrastructure between Switzerland and Sweden. In Zürich, you use WhatsApp, Twint and Luma; in Stockholm, Messenger, Swish and Facebook.&lt;/p>
&lt;p>One of the great boons of coming to Switzerland was being free to delete Facebook. I&amp;rsquo;d reluctantly been using Facebook to be notified of events and for Facebook Marketplace, though I often found myself doomscrolling. I knew it was affecting my wellbeing negatively &amp;ndash; I felt much better after deactivating my Instagram two years prior&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup> &amp;ndash; so I tried lots of methods to minimise doomscrolling. Nothing worked (good job, Zuck); getting rid of Facebook seemed like the simplest solution&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>.&lt;/p>
&lt;p>To be notified of events in Zürich, I relied on Luma, an event management platform, as well as newsletters and posters. Being a sucker for good UI, I became particularly fond of Luma. For messaging, I used WhatsApp and Signal, apps which aren&amp;rsquo;t nearly as buggy as Messenger.&lt;/p>
&lt;p>Moving to Zürich meant I got to use better software in my day-to-day, and this had a surprisingly big positive impact on my life satisfaction. In fact, I suspect we&amp;rsquo;d be much better off if we collectively decided to ditch bad software and use the apps everyone secretly wants to use anyway&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>. I also began reflecting on network effects more broadly.&lt;/p>
&lt;p>When economists speak of &amp;lsquo;network&amp;rsquo; or &amp;lsquo;scaling effects&amp;rsquo;, they mean products which become more valuable as more people use them. A prime example is Facebook: people use Facebook because their friends use it, though, in reality, they might prefer alternative platforms. Markets with network effects are sometimes called &amp;lsquo;winner-takes-all markets&amp;rsquo;.&lt;/p>
&lt;p>Let&amp;rsquo;s look at a few examples of positive and negative network effects, just to make things more concrete.&lt;/p>
&lt;p>Amazon, digital payment platforms and GitHub are examples of positive network effects. On Amazon, there&amp;rsquo;s a virtuous cycle: more buyers, more sellers, lower prices, more buyers, and so on &amp;ndash; network effects, right there. Digital payment platforms like Twint, Wise or Swish only work when your friends have them, and so also leverage network effects.&lt;/p>
&lt;p>Social media, digital labour platforms and the App Store are arguably examples of negative network effects. We could be using email, SMS and Discord to foster meaningful connections, but we&amp;rsquo;re stuck with general-purpose, ad-bloated platforms like Facebook and Instagram&lt;sup id="fnref:4">&lt;a href="#fn:4" class="footnote-ref" role="doc-noteref">4&lt;/a>&lt;/sup>. Network effects also lead to bad working conditions for platform workers&lt;sup id="fnref:5">&lt;a href="#fn:5" class="footnote-ref" role="doc-noteref">5&lt;/a>&lt;/sup>: food delivery drivers need to accept the conditions of digital labour platforms like Uber, Foodora or Wolt, often having to forego basic labour protections. Something similar happens on the App Store: developers are forced, as it were, to sell their products there and to accept Apple&amp;rsquo;s terms and conditions.&lt;/p>
&lt;p>Unfortunately, such negative network effects are hard to disrupt. The standard way to address negative network effects seems to be &lt;a href="https://www.benkuhn.net/nwe/">through regulation&lt;/a>, though &lt;a href="https://isabeldahlgren.github.io/policy-making-is-complicated/">good regulation is difficult&lt;/a>.&lt;/p>
&lt;p>Continental philosophers like statements of the form &amp;lsquo;X governs our lives&amp;rsquo;, where X could be &amp;lsquo;money&amp;rsquo; (Marx), &amp;lsquo;power&amp;rsquo; (Foucault) or &amp;lsquo;gravity&amp;rsquo; (Weil). In the digital era, network effects can become ever stronger: the network can grow to include anyone with a smartphone.&lt;/p>
&lt;p>For companies exploiting network effects: please, take care of us. For you and me in the network: today, network effects might govern our lives more than we think.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>As soon as I could code a web-based photo portfolio.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>My friend Lukas developed an &lt;a href="https://www.google.com/url?sa=t&amp;amp;source=web&amp;amp;rct=j&amp;amp;opi=89978449&amp;amp;url=https://apps.apple.com/se/app/lorien/id6757672327&amp;amp;ved=2ahUKEwi4t8HHye2SAxUnGhAIHTSPIPAQFnoECCEQAQ&amp;amp;usg=AOvVaw0dmL8Du_Gvca864bU5Q-yq">app to prevent him from doomscrolling&lt;/a>; I wish it was around when I was still on social media.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>In fact, a 2-min &lt;a href="https://gemini.google.com/app/238622299477c1e5">Gemini-assisted Fermi estimate&lt;/a> yields an increase in 2000 QALYs if Stockholmers switch from Facebook to Luma for event management. That corresponds to 25-80 lives saved &amp;ndash; not bad.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:4">
&lt;p>Discord works infinitely better than traditional social media, seriously. Any functionalities of Facebook relevant to social connection are also available on Discord.&amp;#160;&lt;a href="#fnref:4" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:5">
&lt;p>See &lt;a href="https://www.hrw.org/report/2025/05/12/the-gig-trap/algorithmic-wage-and-labor-exploitation-in-platform-work-in-the-us">this report from Human Rights Watch&lt;/a>.&amp;#160;&lt;a href="#fnref:5" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/the-power-of-network-effects/</guid><pubDate>Sun, 01 Feb 2026 00:00:00 +0000</pubDate></item><item><title>My LLM stack</title><link>https://isabeldahlgren.github.io/my-llm-stack/</link><description>&lt;p>You have a lot of power at your fingertips in early 2026, with highly capable AI cheaply available. Base models like GPT-5.2 and Claude Opus 4.5 have &lt;a href="https://epoch.ai/benchmarks/gpqa-diamond">expert-level subject knowledge&lt;/a> in most STEM fields, and they can autonomously complete tasks taking &lt;a href="https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/">almost a full work day&lt;/a>. And that&amp;rsquo;s just for base models. There are also powerful agent scaffolding tools for base models &amp;ndash; take Claude Cowork, for example.&lt;/p>
&lt;p>Whatever tasks an AI can solve with your oversight, you can solve too. Many ordinary people can solve the same kinds of problems as certified therapists, experienced architects or ML PhD students with reasonable effort.&lt;/p>
&lt;p>It&amp;rsquo;s well worth learning how to use LLMs efficiently, then.&lt;/p>
&lt;p>It feels like we&amp;rsquo;re told this all the time, and Demis Hassabis eloquently made the point in a &lt;a href="https://www.youtube.com/watch?v=NnVW9epLlTM">seminar at the World Economic Forum&lt;/a> this week. Because &amp;lsquo;new year, new me&amp;rsquo;, I thought I&amp;rsquo;d listen.&lt;/p>
&lt;h3 id="using-llms-efficiently">Using LLMs efficiently &lt;a href="#using-llms-efficiently" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Using LLMs efficiently seems to involve two things: first, identifying good use cases for AI; second, using AI to implement the solution.&lt;/p>
&lt;p>Step two, execution, is normally just some kind of prompt engineering &amp;ndash; a skill I&amp;rsquo;d expect to become less important in the future: producing user-friendly products is largely about eliminating the need for sophisticated prompt engineering. Step one is harder: it requires creativity and attention to failure modes in one&amp;rsquo;s everyday life.&lt;/p>
&lt;p>To learn to use LLMs efficiently, I decided to catalogue the main ways in which I use LLMs. I&amp;rsquo;ll stress that I&amp;rsquo;m almost certainly using AI suboptimally and that these use cases are specific to my lifestyle, so my list shouldn&amp;rsquo;t be taken prescriptively.&lt;/p>
&lt;p>For proper, well-curated lists of LLM use cases, I recommend browsing through the links listed by Gavin Leech in his &lt;a href="https://www.gleech.org/llms#see-also">post on LLM usage&lt;/a>. Feel free to share your own favourite AI uses, e.g. via &lt;a href="mailto:isabel.dahlgren@gmail.com">email&lt;/a>.&lt;/p>
&lt;p>Anyway, here goes.&lt;/p>
&lt;h3 id="tried-and-tested-use-cases">Tried-and-tested use cases &lt;a href="#tried-and-tested-use-cases" class="hash">#&lt;/a>&lt;/h3>
&lt;ul>
&lt;li>I have a low barrier for asking LLMs &amp;lsquo;dumb&amp;rsquo; questions regarding individual proof steps. Similarly, I frequently use LLMs as sanity checkers, writing a sentence summarising my understanding of a topic and having an LLM give feedback.&lt;/li>
&lt;li>LLMs are excellent for explaining terminology in plain, simple language. Two caveats, though: for looking up words in foreign languages, I strongly prefer ordinary dictionaries, like those of &lt;a href="https://www.ne.se">NE&lt;/a>; I&amp;rsquo;m also skeptical of using LLMs as authoritative sources for definitions.&lt;/li>
&lt;li>I sometimes have LLMs lecture me on specific topics in mathematics. This is particularly useful when you&amp;rsquo;re self-studying rather than following a course with peers &amp;ndash; e.g. when you&amp;rsquo;re doing a semester paper. Concretely, this looks something like dumping relevant papers, textbook chapters and lecture notes into NotebookLM and asking for a 3-page PDF summary focusing on key intuitions.&lt;/li>
&lt;li>Of course, I rely heavily on coding agents, Claude Code being my favourite. Claude can handle most programming-related tasks today &amp;ndash; even things like Slurm management&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>.&lt;/li>
&lt;li>Routine email writing, like finding a time to meet.&lt;/li>
&lt;li>Transcription of my favourite &lt;em>In Our Time&lt;/em> episodes.&lt;/li>
&lt;li>Like Andy Masley, I use AI for &lt;a href="https://andymasley.substack.com/p/how-i-use-ai?open=false#%C2%A7clearing-ugh-fields">clearing ugh fields&lt;/a>, like household chores. It&amp;rsquo;s not that I need AI to file my taxes; rather, it lowers the activation energy for the task. Or, to cite another example in this category, I used AI to repair my desk lamp the other day.&lt;/li>
&lt;li>Python scripts can save you a lot of time, and coding agents can whip up scripts in minutes. For example, I have scripts for creating Anki cards, for web scraping and for merging markdown files.&lt;/li>
&lt;/ul>
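&lt;p>To give a flavour of how small such scripts can be, here&amp;rsquo;s a minimal sketch of a markdown-merging script. The directory layout, section labels and function name are illustrative assumptions, not my actual setup:&lt;/p>

```python
from pathlib import Path

def merge_markdown(src_dir, out_file):
    """Concatenate all .md files in src_dir (sorted by name) into out_file."""
    parts = []
    for path in sorted(Path(src_dir).glob("*.md")):
        # Label each section with its source file name (illustrative convention).
        parts.append("## " + path.name + "\n\n" + path.read_text(encoding="utf-8").strip())
    merged = "\n\n".join(parts) + "\n"
    Path(out_file).write_text(merged, encoding="utf-8")
    return merged
```

&lt;p>A coding agent can whip up something like this, plus error handling, from a one-line prompt &amp;ndash; which is the whole point.&lt;/p>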
&lt;h3 id="experimental-use-cases">Experimental use cases &lt;a href="#experimental-use-cases" class="hash">#&lt;/a>&lt;/h3>
&lt;ul>
&lt;li>I&amp;rsquo;m experimenting with using AIs to combat the planning fallacy. I recently ran a project proposal through Claude, which told me in no uncertain terms that I was committing the planning fallacy&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. After some back and forth with the LLM, I came away with a far better proposal. Extending this point: perhaps I could have an AI coach me in some areas of my life?&lt;/li>
&lt;li>I&amp;rsquo;m considering taking LLM input for some major life decisions, e.g. career choices.&lt;/li>
&lt;li>Some people, like Kaj Sotala, use AI as &lt;a href="https://kajsotala.fi/2025/01/things-i-have-been-using-llms-for/">aids for introspection&lt;/a>. Maybe worth trying?&lt;/li>
&lt;/ul>
&lt;h3 id="conclusion">Conclusion &lt;a href="#conclusion" class="hash">#&lt;/a>&lt;/h3>
&lt;p>As is evident from my examples, today&amp;rsquo;s AI is powerful enough to be useful in almost any domain of life. This year, I&amp;rsquo;ll continue exploring novel AI use cases, and I expect this to improve my quality of life in some respects. However, I&amp;rsquo;m also curious to learn whether AI can actually improve one&amp;rsquo;s day-to-day, or whether the main boons from AI might come from things like accelerated scientific progress.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>It&amp;rsquo;s easy to see why programmers have notoriously short AGI timelines.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>Interestingly, I fed other AI models the same prompt (GPT, Gemini and Perplexity) &amp;ndash; but I only received real pushback from Claude. And I want feedback that stings a bit.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/my-llm-stack/</guid><pubDate>Sun, 25 Jan 2026 00:00:00 +0000</pubDate></item><item><title>Building a CLI coding gym</title><link>https://isabeldahlgren.github.io/building-a-cli-coding-gym/</link><description>&lt;p>I attended a coding event this Wednesday, with participants hacking away on vibe coding projects and demoing their results by the end of the evening. My project was a CLI tool for developing tacit programming knowledge: Terminal Tutor. Code and docs are publicly available &lt;a href="https://github.com/isabeldahlgren/terminal-tutor">here&lt;/a>, though this post is the real README.&lt;/p>
&lt;p>I view Terminal Tutor as a coding gym &amp;ndash; a way to keep your programming muscles from atrophying in the age of autonomous coding agents. And the programming muscles you&amp;rsquo;re maintaining aren&amp;rsquo;t about syntax (&lt;code>plt.set_title&lt;/code> or &lt;code>plt.title&lt;/code>?); instead, you&amp;rsquo;re maintaining, or even growing, precisely the muscles that matter in the age of vibe coding.&lt;/p>
&lt;h3 id="defining-tacit-programming-knowledge">Defining tacit programming knowledge &lt;a href="#defining-tacit-programming-knowledge" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Specifically, Terminal Tutor aims to develop &lt;a href="https://www.lesswrong.com/posts/SXJGSPeQWbACveJhs/the-best-tacit-knowledge-videos-on-every-subject">tacit knowledge&lt;/a>, knowledge that&amp;rsquo;s hard to articulate in verbal instruction. To me it seems like tacit programming knowledge falls into two categories: consistently making the right design choices and terminal fluency. By terminal fluency, I mean instantly knowing which shell commands to execute, and with the correct syntax.&lt;/p>
&lt;p>The best way to develop tacit knowledge is by working alongside an expert explaining their reasoning, i.e. &lt;a href="https://www.dwarkesh.com/p/andrej-karpathy?open=false#%C2%A7future-of-education">through apprenticeship&lt;/a>. Teaching skills matter a lot here, perhaps more than subject knowledge. But perhaps you don&amp;rsquo;t have the capacity to do a coding internship &amp;ndash; the natural kind of coding apprenticeship &amp;ndash; and in this case, LLMs might be useful. LLMs are both excellent tutors and excellent programmers, so it&amp;rsquo;s natural to ask whether one can use LLMs to develop tacit programming knowledge.&lt;/p>
&lt;p>Developing a programmer&amp;rsquo;s eye for design is genuinely hard, even with LLMs. Developing terminal fluency is also hard: the right bash commands are unknown unknowns. But LLMs can help develop terminal fluency, which is why I chose to focus on it.&lt;/p>
&lt;h3 id="why-terminal-fluency-still-matters">Why terminal fluency still matters &lt;a href="#why-terminal-fluency-still-matters" class="hash">#&lt;/a>&lt;/h3>
&lt;p>And to be clear, I believe you have good reason to develop terminal fluency, even in the age of autonomous coding agents.&lt;/p>
&lt;p>There&amp;rsquo;s still a significant gap between a senior dev and a rookie with Claude Code Max. It&amp;rsquo;s not just a matter of coding the right thing rather than the wrong thing: experts also code the right thing five times faster. And without being a senior dev myself, I think terminal skills play a crucial role here: terminal skills speed up whatever happens between LLM inference calls.&lt;/p>
&lt;p>Also, knowing your terminal matters for safety: I&amp;rsquo;m more worried about someone mindlessly accepting a &lt;code>rm -rf /&lt;/code> than about coding agents like Claude Code or Codex quietly sabotaging the code with things like hard-coded solutions.&lt;/p>
&lt;p>Last but not least, vibe coding is more fun when you can follow the commands executed; otherwise, it&amp;rsquo;s like watching a game of chess without knowing about common strategies. You want to vibe with the coding agent rather than being its &lt;code>sudo&lt;/code> slave.&lt;/p>
&lt;h3 id="how-terminal-tutor-works">How Terminal Tutor works &lt;a href="#how-terminal-tutor-works" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Terminal Tutor is a wrapper around Claude Code which has Claude explain all commands in detail &amp;ndash; I like it when programming teachers explain every line of code, &lt;a href="https://karpathy.ai/zero-to-hero.html">Karpathy-style&lt;/a> &amp;ndash; and log commands we&amp;rsquo;ve covered, so I can revisit them later. Under the hood, Terminal Tutor is just a CLAUDE.md file and some Python scripts to have Terminal Tutor remember mastered commands across repos.&lt;/p>
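&lt;p>To make the cross-repo memory concrete, the sketch below shows one way such persistence could work: a global JSON registry in the home directory. The path, file format and function names are illustrative assumptions &amp;ndash; see the repo for the actual scripts.&lt;/p>

```python
import json
from pathlib import Path

# A single registry outside any repo, so mastery persists across projects.
# (Illustrative path; not necessarily what Terminal Tutor uses.)
REGISTRY = Path.home() / ".terminal_tutor" / "mastered.json"

def load_mastered():
    """Return the set of shell commands already marked as mastered."""
    if REGISTRY.exists():
        return set(json.loads(REGISTRY.read_text(encoding="utf-8")))
    return set()

def mark_mastered(command):
    """Record one command, e.g. 'xargs -0', as mastered."""
    mastered = load_mastered()
    mastered.add(command)
    REGISTRY.parent.mkdir(parents=True, exist_ok=True)
    REGISTRY.write_text(json.dumps(sorted(mastered)), encoding="utf-8")
```

&lt;p>The CLAUDE.md instructions could then tell Claude to skip detailed explanations for any command already in the registry.&lt;/p>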
&lt;p>&lt;a href="https://github.com/isabeldahlgren/terminal-tutor">Here&amp;rsquo;s&lt;/a> the package; feel free to play around. I have ideas for future extensions (e.g. Anki integration via AnkiConnect), though more experienced programmers may have a better sense of what&amp;rsquo;s missing.&lt;/p>
&lt;p>I&amp;rsquo;ve tested Terminal Tutor for my research projects over the last week, and it works surprisingly well: Terminal Tutor appears to be as good a coding agent as normal Claude. Though commands take slightly longer to run and I have to exert more effort, I learn more while coding. Above all, using Terminal Tutor is more fun &amp;ndash; it feels good doing one&amp;rsquo;s programming reps.&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/building-a-cli-coding-gym/</guid><pubDate>Sun, 18 Jan 2026 00:00:00 +0000</pubDate></item><item><title>Positive psychology for rationalists</title><link>https://isabeldahlgren.github.io/positive-psychology-for-rationalists/</link><description>&lt;p>Books like &lt;em>Factfulness&lt;/em> (2018) and &lt;em>Enlightenment Now&lt;/em> (2018) were very favourably received, partly because high-profile people were writing about the right things at the right time. More specifically, I suspect that these books largely owe their success to their feel-good component. Recall 2018: we had the Cambridge Analytica scandal, the Thunberg climate strikes, the murder of Jamal Khashoggi. Who wouldn&amp;rsquo;t have wanted to hear a grandpa-like professor saying that everything would be alright?&lt;/p>
&lt;p>I recently discovered a wellbeing hack: reminding oneself of positive historical events, or positive psychology for rationalists. As I mentioned in &lt;a href="https://isabeldahlgren.github.io/the-varieties-of-optimal-experience/">an earlier post&lt;/a>, I often think about the fall of the Berlin Wall &amp;ndash; not just because history is important, but also because it makes me happy. Here are other pieces of historical trivia which similarly fill me with a sense of awe and wonder:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Convivencia happened, at least a bit&lt;/strong>: Around 800-1500 AD, Muslim, Jewish and Christian communities peacefully coexisted in medieval Spain. This period is usually referred to as the &amp;lsquo;convivencia&amp;rsquo; (Spanish for &amp;lsquo;living together&amp;rsquo; or &amp;lsquo;coexistence&amp;rsquo;). It wasn&amp;rsquo;t just that they didn&amp;rsquo;t kill people with other religious beliefs &amp;ndash; there even appears to have been significant cultural exchange&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. After listening to the &lt;em>In Our Time&lt;/em> &lt;a href="https://www.bbc.co.uk/programmes/p00548l1">episode about Muslim Spain&lt;/a>, I came away feeling very optimistic about human nature.&lt;/li>
&lt;li>&lt;strong>Dissolution of the Soviet Union&lt;/strong>: One could argue that the USSR dissolved without bloodshed: it didn&amp;rsquo;t end with a treaty concluding a bloody third world war. And though you might object that the Cold War wasn&amp;rsquo;t bloodless, things could have been far worse: we could have had a nuclear war. Gorbachev was rather friendly, at least in comparison with the current Russian ruler, and he could readily have chosen to start a war when he realised the union was collapsing&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup> &amp;ndash; that would have been his most straightforward way of achieving immortality through history.&lt;/li>
&lt;li>&lt;strong>Queen Christina doing her thing&lt;/strong>: I&amp;rsquo;ve always been fascinated by Queen Christina of Sweden. Today, she has become something of a queer icon: she was a tomboy, preferring masculine hobbies and wearing men&amp;rsquo;s clothes, and declared at 22 she&amp;rsquo;d never marry&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>&lt;sup id="fnref:4">&lt;a href="#fn:4" class="footnote-ref" role="doc-noteref">4&lt;/a>&lt;/sup>. Five years later, the daughter of Gustav II Adolf, the king who died defending Protestantism in the Thirty Years&amp;rsquo; War, abdicated, converted to Catholicism and moved to Rome. A queer queen, indeed.&lt;/li>
&lt;li>&lt;strong>Abolition of slavery&lt;/strong>: Just appreciate that slavery was abolished, since &lt;a href="https://80000hours.org/podcast/episodes/christopher-brown-slavery-abolition/">the abolition of slavery wasn&amp;rsquo;t inevitable&lt;/a>. More broadly, the abolition of slavery represents the first big &lt;a href="https://en.wikipedia.org/wiki/Moral_circle_expansion">moral circle expansion&lt;/a>.&lt;/li>
&lt;li>&lt;strong>Eradication of smallpox&lt;/strong>: A big triumph &amp;ndash; infectious diseases have played a major role in history. Eradicating an infectious disease is notoriously hard, requiring both scientific advances and international coordination; in fact, only two diseases have been completely eradicated to date (smallpox in 1980 and rinderpest in 2011). Smallpox killed approximately 300 million people in the 20th century alone &amp;ndash; and now, no more deaths from smallpox.&lt;/li>
&lt;/ul>
&lt;p>The poster for the &lt;a href="https://www.dhm.de/en/exhibitions/archive/2022/leipzig-89-revolution-reloaded/">Leipzig &amp;lsquo;89 - Revolution Reloaded exhibition&lt;/a> of the Deutsches Historisches Museum hangs on my wall. And in my hallway, there&amp;rsquo;s a poem which (among other things) lists a number of noteworthy historical events, such as glasnost. We must not forget history &amp;ndash; not just to prevent atrocities from repeating themselves, but also to appreciate the present.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>Historians debate whether convivencia was as utopian as it sounds.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>But he let his world &amp;ndash; the USSR &amp;ndash; end not with a bang but with a whimper.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>Christina wrote a large number of &lt;a href="https://www.adlibris.com/sv/bok/maximer-9789127170209">maxims&lt;/a> during her lifetime. One of them: &amp;lsquo;It takes more courage to marry than to go to war&amp;rsquo;.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:4">
&lt;p>She was learned too, mastering at least &lt;a href="https://iep.utm.edu/wasa/#:~:text=Killed%20at%20the%20battle%20of,Stoics%2C%20notably%20Epictetus%20and%20Seneca.">five foreign languages&lt;/a> in addition to her native Swedish and German. The queen was also keen on philosophy, having Descartes come to Stockholm and give her private lessons at 5am in the winter (the Frenchman soon caught a cold and died ten days later, no joke).&amp;#160;&lt;a href="#fnref:4" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/positive-psychology-for-rationalists/</guid><pubDate>Sun, 11 Jan 2026 00:00:00 +0000</pubDate></item><item><title>On co-living, fractals and life circles</title><link>https://isabeldahlgren.github.io/on-co-living-fractals-and-life-circles/</link><description>&lt;p>Now that I&amp;rsquo;m nearing the end of my studies, I&amp;rsquo;ve reflected a lot on the value of having a tight-knit community, as you always hear recent graduates say they miss the sense of community from college.&lt;/p>
&lt;p>I&amp;rsquo;m mainly interested in communities sharing most aspects of life, ranging from professional to personal &amp;ndash; the urban analogue of a &lt;a href="https://en.wikipedia.org/wiki/Kibbutz">kibbutz&lt;/a>; I&amp;rsquo;ll call such communities &amp;lsquo;life circles&amp;rsquo;. Life circles involve standing social events, often at a third space, and their members share a sense of community.&lt;/p>
&lt;p>Life circles arise naturally within some Eastern cultures, where they&amp;rsquo;re usually centered around the family unit, though you&amp;rsquo;ll also find life circles among creatives, entrepreneurs and other people &amp;lsquo;with character&amp;rsquo; in the West. Examples of life circles in the West include &lt;a href="https://joinarcadia.org">Arcadia&lt;/a>, &lt;a href="https://emergelakefront.org">Emerge Lakefront&lt;/a> and &lt;a href="https://feytopia.com">Feytopia&lt;/a>. However, a circle doesn&amp;rsquo;t need to entail co-living: &lt;a href="https://fractalnyc.com">Fractal&lt;/a> is just a housing network.&lt;/p>
&lt;p>Having a life circle gives you a professional edge. Political, artistic and scientific movements are collective endeavours, and they cannot be carried out by a single person, however smart. Famous examples of life circles include the Bloomsbury group and the Vienna Circle. Similarly, you cannot successfully run a company without having some very good friends &amp;ndash; not just coffee-machine acquaintances, but friends you can rely on no matter what. There&amp;rsquo;s a real sense, then, in which world history wasn&amp;rsquo;t shaped by individuals, but by groups of individuals.&lt;/p>
&lt;p>But above all, feeling a sense of belonging is important for overall life satisfaction, and a life circle provides just that.&lt;/p>
&lt;p>Of course &amp;ndash; you don&amp;rsquo;t need a life circle to live well: you can have different friendship groups for the various areas of your life. This creates more overhead, though, and requires you to be both agentic and good with people (I&amp;rsquo;m neither).&lt;/p>
&lt;p>Though Stockholm has among the highest proportions of one-person households in the world (a staggering &lt;a href="https://ourworldindata.org/social-connections-and-loneliness">60%&lt;/a>)&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>, there are several life circle initiatives here. The startup ecosystem finds itself in a chicken-egg relationship with life circles, and there are various innovation hubs with life circle vibes: &lt;a href="https://ahouse.se/?gad_source=1&amp;amp;gad_campaignid=21793326861&amp;amp;gbraid=0AAAAACahM7xx1yNHuwh_dKKlZ70xHr_CN&amp;amp;gclid=Cj0KCQiAvOjKBhC9ARIsAFvz5lhQHxEN6Gzcl7AzVB_r5oLElQcMJo3m1UrBqNTUc8QMryIIu7IIGSYaAno-EALw_wcB">A House&lt;/a>, &lt;a href="https://www.knackeriet.se">Knackeriet&lt;/a>, &lt;a href="https://www.thisisalma.com">Alma&lt;/a>, &lt;a href="https://www.norrsken.org/houses/stockholm">Norrsken&lt;/a>, &lt;a href="https://helio.se/sv/slottsbacken/">Slottsbacken&lt;/a>. There are also co-living communities like &lt;a href="https://emergelakefront.org">Emerge Lakefront&lt;/a>, &lt;a href="https://www.k9coliving.com">K9&lt;/a> and &lt;a href="https://www.udda.co">Udda&lt;/a> (more examples &lt;a href="https://www.visitstockholm.com/live-work/your-life-stockholm/co-living-in-stockholm-i-wish-this-was-my-starting-point/">here&lt;/a>).&lt;/p>
&lt;p>More broadly, it&amp;rsquo;s worth mentioning the student apartments and co-living spaces of Stockholms studentbostäder (SSSB), which provide natural meeting hubs for students and young professionals. There&amp;rsquo;s also a long-standing &lt;a href="https://anekdot.se/bildningspoddavsnitt/32-studiecirkeln-live/">tradition of study circles in Sweden&lt;/a>, some of which also have a life circle ring to them.&lt;/p>
&lt;p>Soon-to-be graduates find themselves somewhere between communities based around student apartments and exclusive co-working spaces. If you don&amp;rsquo;t want to live in a shared apartment, you&amp;rsquo;re not part of a life circle by default. Or even if you have access to an exclusive co-working space, perhaps you wish to be part of a more diverse community. And there&amp;rsquo;s no obvious life circle in Stockholm for such people. Problem.&lt;/p>
&lt;p>Partial solution (feel free to submit your own solutions &lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScxJtKLMFqQka14yyiZbqlHpGp27A-7ei6tvMn8HaDLJVUqIg/viewform?usp=publish-editor">here&lt;/a>): as a very minimal life circle implementation, you could create a Luma calendar with friends with both standing events and casual one-offs &amp;ndash; see e.g. the &lt;a href="https://luma.com/fractalgva">calendar of Fractal Geneva&lt;/a>. I imagine people adding their own events (casual dinners, vibe-coding afternoons, museum visits), as well as events they plan on attending (seminars, plays, concerts). Subscribers of the Luma calendar will then be notified when someone adds an event and can mark whether they&amp;rsquo;ll join. In addition to the calendar, you may want a group chat or a Slack for casual meet-ups.&lt;/p>
&lt;p>While you could technically do all event managing through a group chat, group chats don&amp;rsquo;t scale as well, and they don&amp;rsquo;t allow for the automatic creation of recurring events. Recurring events form an integral part of any kind of life circle: I wouldn&amp;rsquo;t count on people organising casual gatherings to keep the circle together. Oh, and you&amp;rsquo;d also be missing out on the beautiful Luma interface.&lt;/p>
&lt;p>I just created a calendar; let me know if I should add you.&lt;/p>
&lt;p>&lt;em>This post was inspired by conversations with Agatha Duzan, Elias Koschier, Chiara Gerosa, Konrad Seifert and Åke Lindblom.&lt;/em>&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>This doesn&amp;rsquo;t translate into higher rates of loneliness, though. In &lt;a href="https://ourworldindata.org/lonely-not-alone">one study&lt;/a>, self-reported rates of loneliness among older adults were the lowest in Denmark, Switzerland and Sweden, and the highest in Greece, Israel and Italy.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/on-co-living-fractals-and-life-circles/</guid><pubDate>Sun, 04 Jan 2026 00:00:00 +0000</pubDate></item><item><title>On logging and living</title><link>https://isabeldahlgren.github.io/on-logging-and-living/</link><description>&lt;p>During our family gatherings, my mother loves taking pictures on her iPhone; my father rarely takes pictures unless prompted by my mother (we need photos of her too). I find myself somewhere between: I make an effort to take pictures when seeing family and friends, though I dislike pulling up the camera.&lt;/p>
&lt;p>My parents represent two distinct camps: the log and live camps. The loggers are the people taking pictures all the time to record their lives; the livers think that picture-taking ruins the experience and take no pictures at all.&lt;/p>
&lt;p>Loggers have been around forever: keeping a diary, making realistic paintings and collecting objects are all forms of logging. Famous examples of logs include the diaries of Samuel Pepys (17th century) and Anne Frank (1942-1944). Serious photo loggers have only been around since the 1950s, when the instant camera arrived.&lt;/p>
&lt;p>However, the contrast between the logging and living mentalities is nowhere as stark as within photography, especially today: anyone with a smartphone can easily become a photo logger, so refusing to take pictures is more of a statement. While there have always been loggers, we can&amp;rsquo;t really speak of anti-loggers outside the context of photography, which is why I&amp;rsquo;ll focus on photography for this post.&lt;/p>
&lt;p>There are many kinds of loggers. The forefather of the logger is the Japanese tourist on the hop-on-hop-off bus with one Nikon on each hip. Now that anyone can take high-quality pictures with their phone, there are more loggers than ever.&lt;/p>
&lt;p>Loggers say they take pictures to remember precious moments with family and friends. Since pictures can be made more objective than e.g. diary entries or paintings, they offer an easier way to share experiences. However, this doesn&amp;rsquo;t seem like the only motivation for logging &amp;ndash; it doesn&amp;rsquo;t explain why everyone needs to take their own picture of the turkey at the Christmas dinner.&lt;/p>
&lt;p>Loggers often reach for the camera when bored: if you&amp;rsquo;re fully attentive, you won&amp;rsquo;t think of taking pictures. Sometimes logging is rooted in a discomfort with doing nothing. Susan Sontag&amp;rsquo;s remark about tourists also holds for loggers:&lt;/p>
&lt;blockquote>
&lt;p>Using a camera appeases the anxiety which the work‑driven feel about not working when they are on vacation and supposed to be having fun. They have something to do that is like a friendly imitation of work: they can take pictures.&lt;/p>
&lt;/blockquote>
&lt;p>Logging is also a way to appropriate an unfamiliar situation. When taking a picture, the photographer decides on what to include and exclude, and this feels much like conquering something unknown.&lt;/p>
&lt;p>In fact, the photographer exerts tremendous power over the social situation: they can make everyone pause what they&amp;rsquo;re doing, move their chairs and put on an awkward smile, the whole affair taking up to several minutes. During the act of photography, the photographer is the centre of attention.&lt;/p>
&lt;p>Livers, often the subjects of the loggers, are best understood as anti-loggers.&lt;/p>
&lt;p>The livers say that excessive picture-taking ruins the experience, both for the photographer and the subject. Taking good pictures is a hassle: you need to pull up your camera, compose the image, tell everyone to look natural, and redo steps two and three until satisfied. The subjects grow tired of posing, especially if there are several loggers around &amp;ndash; as there often are, since everyone needs their own picture.&lt;/p>
&lt;p>More worrying, however, is how loggers disrupt the social dynamics. If logging becomes the dominant social activity, then a moment is valuable only insofar as it leads to a good photo &amp;ndash; picture-taking creates a strong &lt;a href="https://en.wikipedia.org/wiki/Hawthorne_effect">Hawthorne effect&lt;/a>. By logging, the logger might also signal that they&amp;rsquo;re bored, whether it&amp;rsquo;s true or not: the subjects cannot tell whether you&amp;rsquo;re scrolling social media or checking whether your cousin blinked. Either way, you&amp;rsquo;re distancing yourself from the others.&lt;/p>
&lt;p>In its purest form, I view logging as immortalising experience, making joyful moments accessible to future selves and future generations. The fundamental tension between loggers and livers has to do with delaying pleasure: loggers prefer savouring a joyful experience for longer, while livers swallow it in one go &amp;ndash; it&amp;rsquo;s like the difference between having an americano and a caffè normale.&lt;/p>
&lt;p>However, I agree with livers that picture-taking should be made less invasive. Loggers can share photos among themselves, rather than having subjects pose for three different cameras; they can agree on norms for when picture-taking is allowed. When seeing friends, I only do portraits just as we&amp;rsquo;re about to say goodbye &amp;ndash; this is my way of indicating that our meeting wasn&amp;rsquo;t about producing pictures.&lt;/p>
&lt;p>A viewfinder introduces a distance. And the logger cannot close the distance by passing their phone or camera around the table and showing the pictures: this distracts from the other social activity. Everyone wants to look at the photos, for photos are attractors of attention. The photographer, then, needs to wield their power carefully.&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/on-logging-and-living/</guid><pubDate>Sun, 28 Dec 2025 00:00:00 +0000</pubDate></item><item><title>The varieties of optimal experience</title><link>https://isabeldahlgren.github.io/the-varieties-of-optimal-experience/</link><description>&lt;p>I often wonder what it must have been like experiencing the fall of the Berlin wall first-hand. The photographs of ordinary citizens on the wall flinging their hands into the air give me goosebumps; I bet many eye-witnesses would cite the fall of the Berlin wall as the happiest moment of their lives.&lt;/p>
&lt;p>How about the rest of us? On their deathbed, what would the average person view as the happiest moment of their life? Here I use the word &amp;lsquo;happy&amp;rsquo; to mean &amp;lsquo;joy&amp;rsquo;, rather than a sense of ataraxia, so I&amp;rsquo;m asking about moments of ecstasy.&lt;/p>
&lt;p>Although this is personality-dependent, surely there are some patterns to people&amp;rsquo;s peak experiences. Here&amp;rsquo;s an attempt at classifying them.&lt;/p>
&lt;p>One class of peak experiences has to do with achievements. The fall of the Berlin wall was a remarkable achievement; it signified a victory of freedom over totalitarianism &amp;ndash; or even &lt;em>the&lt;/em> victory of freedom over totalitarianism, for believers in the end of history. Or consider scoring the decisive penalty in the shoot-out of a World Cup final.&lt;/p>
&lt;p>Eureka moments also fall into the category of achievements. Einstein famously described the following realisation, at the heart of his theory of general relativity, as the happiest thought of his life:&lt;/p>
&lt;blockquote>
&lt;p>&lt;em>The gravitational field has only a relative existence&amp;hellip;&lt;/em> Because for an observer freely falling from the roof of a house - &lt;em>at least in his immediate surroundings&lt;/em> - there exists no gravitational field.&lt;/p>
&lt;/blockquote>
&lt;p>But unless you&amp;rsquo;ve dedicated your life to a professional or religious cause, like Einstein, you&amp;rsquo;re unlikely to mention an achievement as the happiest moment of your life; instead, most people dedicate their lives to relationships&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>.&lt;/p>
&lt;p>Moments of nearness with loved ones are the most obvious source of happy moments. Typical examples include becoming a parent, getting married or exchanging &amp;lsquo;I love you&amp;rsquo;s for the first time.&lt;/p>
&lt;p>For moments of nearness, there&amp;rsquo;s usually an element of novelty or rareness. However, you can also experience extraordinarily joyful moments while carrying out an activity for the millionth time, if you&amp;rsquo;re in a state of flow.&lt;/p>
&lt;p>While there are many kinds of flow, we can crudely speak of either mental or physical flow. Mental flow states include painting, reading a good book or having good conversations; in contrast, physical flow states make crucial use of the body, a good example being runner&amp;rsquo;s high. Cycling or skiing can also put you in a state of physical flow; as can you-know-what&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>.&lt;/p>
&lt;p>Finally, the elephant in the room: drugs. Based on the numerous descriptions of LSD trips in books, movies and music, I assume a few people would mention their best trip as the happiest moment in their life; however, I suspect these people would be in the minority of all people ever having tried drugs. Given that two thirds of Europeans don&amp;rsquo;t try drugs in their lifetime anyway, I wouldn&amp;rsquo;t count trips as a separate category.&lt;/p>
&lt;p>By now, our taxonomy of highs seems relatively complete. Many people would probably mention a personal milestone, a moment with loved ones or a session of flow as the happiest moment in their life, although the details may vary. One Redditor recalled &amp;lsquo;sneaking away from the group at a church camping trip with your best friend and breaking into an abandoned slaughterhouse while pretending you&amp;rsquo;re ninjas&amp;rsquo; as one of his happiest moments &amp;ndash; a very specific instance of nearness and physical flow.&lt;/p>
&lt;p>Taking stock of my own happiest moments, two or three moments stand out, and they fall within several of the above categories. But of course, this list is subject to modification &amp;ndash; I don&amp;rsquo;t believe in the end of history.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>See Richard M. Berlin&amp;rsquo;s spin-off of Einstein&amp;rsquo;s realisation, &lt;em>Einstein&amp;rsquo;s Happiest Moment&lt;/em>.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>Yes. Mihaly Csikszentmihalyi even dedicates a subchapter of &lt;em>Flow&lt;/em> to this topic.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/the-varieties-of-optimal-experience/</guid><pubDate>Sun, 21 Dec 2025 00:00:00 +0000</pubDate></item><item><title>LLMs and the English Language</title><link>https://isabeldahlgren.github.io/llms-and-the-english-language/</link><description>&lt;p>Many LLMs have a &amp;rsquo;thinking mode&amp;rsquo;, also called &amp;lsquo;reasoning mode&amp;rsquo;. Activating thinking mode changes how the model structures its generation: it generates and evaluates more intermediate outputs before its final answer. So, in exchange for more tokens and longer response times, you get a more &amp;rsquo;thoughtful&amp;rsquo; answer. Thinking mode works phenomenally well, so the temptation to use LLMs for thinking can sometimes be irresistible. However, if I want a genuinely thoughtful answer &amp;ndash; if I&amp;rsquo;m writing for mental clarity &amp;ndash; I&amp;rsquo;ll hardly use LLMs at all.&lt;/p>
&lt;h3 id="llms-can-think">LLMs can &amp;rsquo;think&amp;rsquo;&amp;hellip; &lt;a href="#llms-can-think" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Here I&amp;rsquo;ll focus on critical thinking, which, in its broadest sense, is defined as careful, goal-directed thinking&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. This involves things like analysing arguments, identifying relevant information, questioning received wisdom, and, finally, drawing sound conclusions.&lt;/p>
&lt;p>To me it seems like critical thinking skills fall into two categories, which I&amp;rsquo;ll call &amp;lsquo;wide&amp;rsquo; and &amp;lsquo;deep&amp;rsquo; thinking. Wide thinking is open-ended and explorative, and aims to gather information; deep thinking is detail-oriented and precise, and churns information into conclusions. Deep thinking is the extraction and evaluation of premises, almost like symbol-free mathematics.&lt;/p>
&lt;p>LLMs are very useful for wide thinking tasks. Having been trained on the entire Internet, they know mind-blowing amounts of information, making them invaluable for literature reviews. When self-studying a new topic, I usually begin by consulting an LLM to get a high-level overview &amp;ndash; this is the efficient variant of reading the Wikipedia introduction.&lt;/p>
&lt;p>And these days, LLMs are also pretty useful for deep thinking. State-of-the-art LLMs already achieve 90% accuracy on benchmarks like GPQA (PhD-level science questions) and MMLU-Pro (problem-solving and knowledge across different subjects). They&amp;rsquo;re also terrific problem solvers: GPT-5.2 solved all tasks at this year&amp;rsquo;s AIME, while a version of Gemini achieved gold-medal standards at this year&amp;rsquo;s IMO&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>.&lt;/p>
&lt;p>In short, LLMs can think both widely and deeply, outputting answers reflecting strong critical thinking skills.&lt;/p>
&lt;h3 id="and-so-can-you">&amp;hellip;and so can you &lt;a href="#and-so-can-you" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Nevertheless, when I&amp;rsquo;m writing to understand, rather than to produce an end product for someone else, I barely use LLMs&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>. I consult LLMs for literature reviews and gladly read the AI-generated overviews of search results on Google, but the rest I do myself&lt;sup id="fnref:4">&lt;a href="#fn:4" class="footnote-ref" role="doc-noteref">4&lt;/a>&lt;/sup>.&lt;/p>
&lt;p>Indeed, the other obvious use cases of LLMs for writing involve outsourcing thinking, so I remain responsible for creating outlines, drafting paragraphs and formulating individual sentences &amp;ndash; yes, even formulating individual sentences, or else I&amp;rsquo;m not saying exactly what I mean to say.&lt;/p>
&lt;p>This is much like working through the routine checks and the exercises left to the reader in a piece of mathematical writing. To get a gears-level understanding, as is often required, you should work out all the details on a separate piece of paper. Of course, this is rather time-consuming, so I&amp;rsquo;ll readily use LLMs for routine writing tasks, like emails to colleagues and motivation letters.&lt;/p>
&lt;p>In &lt;em>Politics and the English Language&lt;/em> (1946), his attack on vague political writing, Orwell argues that the use of ready-made phrases leads to imprecise thinking:&lt;/p>
&lt;blockquote>
&lt;p>You can shirk it [the pain of thinking for yourself] by simply throwing your mind open and letting the ready-made phrases come crowding in. They will construct your sentences for you – even think your thoughts for you, to a certain extent – and at need they will perform the important service of partially concealing your meaning even from yourself.&lt;/p>
&lt;/blockquote>
&lt;p>I suspect he&amp;rsquo;s right. And today LLMs can give you ready-made essays and research papers. There have recently been many papers on deceitful behaviour in LLMs&lt;sup id="fnref:5">&lt;a href="#fn:5" class="footnote-ref" role="doc-noteref">5&lt;/a>&lt;/sup>, but we can be deceived in many ways.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>At least according to the &lt;a href="https://plato.stanford.edu/entries/critical-thinking/">Stanford Encyclopedia of Philosophy&lt;/a>. There&amp;rsquo;s a myriad of more specific definitions, though. This &lt;a href="https://philpapers.org/rec/ENNDAT">article&lt;/a> lists 14 philosophical definitions and three dictionary definitions.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>It&amp;rsquo;s still faster for me to typeset proofs myself: if I know the argument inside out, I might as well knock it out myself. However, if I had more money to burn on advanced models and spent an afternoon perfecting a set of prompt templates, using LLMs as TeX slaves would likely save me time.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>And obviously no LLMs for personal writing (essays, emails to friends, etc.).&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:4">
&lt;p>This is a matter of what I call &amp;lsquo;LLM hygiene&amp;rsquo;.&amp;#160;&lt;a href="#fnref:4" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:5">
&lt;p>See &lt;a href="https://www.anthropic.com/research/alignment-faking">here&lt;/a>, &lt;a href="https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/">here&lt;/a> and &lt;a href="https://www.anthropic.com/research/sleeper-agents-training-deceptive-llms-that-persist-through-safety-training">here&lt;/a>.&amp;#160;&lt;a href="#fnref:5" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/llms-and-the-english-language/</guid><pubDate>Sun, 14 Dec 2025 00:00:00 +0000</pubDate></item><item><title>Reasons for optimism in technical AI safety</title><link>https://isabeldahlgren.github.io/reasons-for-optimism-in-technical-ai-safety/</link><description>&lt;p>I&amp;rsquo;ve been doing some technical AI safety research this fall. Technical AI safety is about ensuring AI systems do what they were intended to do, and falls under the broader project of making AI &amp;lsquo;go well&amp;rsquo;.&lt;/p>
&lt;p>If you do technical AI safety research, you spend all day imagining safety failures, and, what is more, you spend all day around AI safety-pilled colleagues. This almost certainly affects your judgement, in some way or other &amp;ndash; and this is scary. In particular, it&amp;rsquo;s easy to slip into hedgehog thinking&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>, becoming overly pessimistic and fixating on details. Perhaps one could even speak of &amp;lsquo;AI safety biases&amp;rsquo; to denote any such distortions in one&amp;rsquo;s judgement.&lt;/p>
&lt;p>However, when I reflect on the state of technical AI safety, rather than just reacting to information in my day-to-day, I find myself thinking that things are going pretty well. Here are a few reasons for optimism with respect to challenges in technical AI safety.&lt;/p>
&lt;h3 id="safety-is-a-priority">Safety is a priority &lt;a href="#safety-is-a-priority" class="hash">#&lt;/a>&lt;/h3>
&lt;p>First of all, the three leading AI labs, OpenAI, Anthropic and Google DeepMind, are all taking AI safety seriously: Anthropic and Google DeepMind both have large teams working exclusively on safety research, while OpenAI is open to collaborating with external parties on safety research&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. Though there&amp;rsquo;s room for improvement, these labs acknowledge that safety should be a priority &amp;ndash; and this shouldn&amp;rsquo;t be taken for granted.&lt;/p>
&lt;p>There are strong financial incentives to roll out new models as quickly as possible, and yet, state-of-the-art models often undergo long periods of safety testing. For example, OpenAI spent half a year aligning GPT-4, arguably the model that gave us the biggest leap in capabilities to date, before its public release in March 2023.&lt;/p>
&lt;p>It&amp;rsquo;s also worth meditating on the rapid change in public discourse about AI risk. Just a few years ago, discussions about AI safety were largely confined to rationalist and effective altruism forums online; nowadays, AI lab CEOs are happy to speak of existential risks from power-seeking AI systems. It&amp;rsquo;s striking how quickly the views of doomers became mainstream&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>.&lt;/p>
&lt;h3 id="safety-is-profitable">Safety is profitable &lt;a href="#safety-is-profitable" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Another reason for optimism is that commercial interests often push towards safer models.&lt;/p>
&lt;p>In practice, the dichotomy between capabilities and safety research is often false &amp;ndash; a deceptive, scheming sycophant wouldn&amp;rsquo;t make for a good product. In one of my current projects, which focuses on safety failures in graphical user interface agents, we&amp;rsquo;ve been struggling to come up with tasks where a highly capable, &amp;rsquo;evil&amp;rsquo; AI would output something different from a dumb AI model misunderstanding user needs&lt;sup id="fnref:4">&lt;a href="#fn:4" class="footnote-ref" role="doc-noteref">4&lt;/a>&lt;/sup>.&lt;/p>
&lt;p>The apparent tension between capabilities and safety research can be traced back to the orthogonality thesis, which posits that a superintelligence could pursue malevolent goals &amp;ndash; intelligence is &amp;lsquo;orthogonal&amp;rsquo; to having good character, as it were. While I think this is a reasonable assumption, intelligence is rarely orthogonal to commercial interests; in reality, they&amp;rsquo;re often strongly correlated.&lt;/p>
&lt;h3 id="safety-is-hot">Safety is hot &lt;a href="#safety-is-hot" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Finally, technical AI safety has become extremely popular over the last few years, as highlighted in Will MacAskill&amp;rsquo;s &lt;a href="https://forum.effectivealtruism.org/posts/R8AAG4QBZi5puvogR/effective-altruism-in-the-age-of-agi#Reason__1__Neglected_cause_areas">article&lt;/a> on effective altruism in the age of AI. Jobs in technical AI safety are well-paid, high-status and can offer a strong sense of purpose. Moreover, the day-to-day looks much like the job of an ML engineer, so technical AI safety work naturally appeals to number- and computerphiles. Today, the field isn&amp;rsquo;t bottlenecked by money or talent, but rather by mentorship &amp;ndash; and this is the dream scenario. This is precisely what a booming field should look like.&lt;/p>
&lt;h3 id="adjusting-for-the-ai-safety-bias">Adjusting for the AI safety bias &lt;a href="#adjusting-for-the-ai-safety-bias" class="hash">#&lt;/a>&lt;/h3>
&lt;p>In summary, I&amp;rsquo;m quite happy with the state of technical AI safety. It&amp;rsquo;s growing extremely fast, and I know many thoughtful, highly skilled people, both from industry and academia, looking to pivot to the field. If you have long AI timelines like me, you have good reason to believe that AI safety research will keep existing models in check, so we&amp;rsquo;ll succeed in building powerful agents doing what they&amp;rsquo;re intended to do. In brief, I think AI is likely to &amp;lsquo;go well&amp;rsquo; &amp;ndash; at least on the technical front&lt;sup id="fnref:5">&lt;a href="#fn:5" class="footnote-ref" role="doc-noteref">5&lt;/a>&lt;/sup>.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>In his 1953 essay, Isaiah Berlin introduced the analogy of the hedgehog and the fox to represent two distinct modes of thinking: the hedgehog reasons from first principles and tries reducing complex problems to their core, while the fox acknowledges complexity and tries to aggregate different views. Hedgehog thinking is more effective for solving technical tasks.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>A good example is the &lt;a href="https://www.arxiv.org/abs/2509.15541">anti-scheming paper&lt;/a> from OpenAI and Apollo.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>See Clara Collier&amp;rsquo;s excellent &lt;a href="https://asteriskmag.com/issues/11/iabied">book review&lt;/a> of &lt;em>If Anyone Builds It, Everyone Dies&lt;/em>.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:4">
&lt;p>For those with some background in AI safety, have a look at the side tasks from the &lt;a href="https://arxiv.org/pdf/2506.15740">SHADE-Arena paper&lt;/a>, which tests models&amp;rsquo; abilities to carry out &amp;lsquo;malicious&amp;rsquo; side tasks in complex, long-horizon settings. A weaker AI might well execute tasks similar to the malicious side tasks by accident.&amp;#160;&lt;a href="#fnref:4" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:5">
&lt;p>And my &amp;rsquo;likely&amp;rsquo; means something like a ≥60% probability.&amp;#160;&lt;a href="#fnref:5" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/reasons-for-optimism-in-technical-ai-safety/</guid><pubDate>Sun, 07 Dec 2025 00:00:00 +0000</pubDate></item><item><title>On shame</title><link>https://isabeldahlgren.github.io/on-shame/</link><description>&lt;p>There are many negative emotions: envy, anger, fear, worry, sadness, to name a few. Positive emotions don&amp;rsquo;t carry as much nuance. When you&amp;rsquo;re asked how you&amp;rsquo;re doing, assuming you&amp;rsquo;re doing good, you&amp;rsquo;ll probably reply you&amp;rsquo;re simply &amp;lsquo;doing good&amp;rsquo;. Positive emotions like happiness, fulfilment or excitement appear to be more strongly correlated with one another &amp;ndash; for this reason, they&amp;rsquo;re often lumped together under the generic term &amp;lsquo;happiness&amp;rsquo;&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>.&lt;/p>
&lt;p>Among negative experiences, worry is the most prevalent. In a &lt;a href="https://www.gallup.com/analytics/349280/state-of-worlds-emotional-health.aspx">2024 Gallup study&lt;/a> where adults were asked which feelings they&amp;rsquo;d experienced &amp;lsquo;a lot&amp;rsquo; the previous day, 39% and 37% reported feelings of worry and stress, respectively. Next followed physical pain (32%), sadness (26%) and anger (22%)&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>.&lt;/p>
&lt;p>While moderate amounts of worry might be productive, some unpleasant feelings seem wholly negative. And the worst of unpleasant feelings is shame.&lt;/p>
&lt;h3 id="shame">Shame &lt;a href="#shame" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Shame is unlike worry, stress, pain, sadness or anger; it&amp;rsquo;s sometimes defined as an uncomfortable feeling of guilt, and is often related to a loss of respect. For me, the most common source of shame is mistakes. Most such mistakes are inconsequential &amp;ndash; even the worst of typos won&amp;rsquo;t kill &amp;ndash; but they impinge on my honour. I feel particularly ashamed when reflecting on social blunders, times when I failed to respect norms or hierarchies.&lt;/p>
&lt;h3 id="coping-with-shame">Coping with shame &lt;a href="#coping-with-shame" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Part of why shame is so insidious is that you can&amp;rsquo;t do much about it: when you&amp;rsquo;re ashamed, part of you is playing the role of the admonishing teacher. You cannot make yourself shameless, as it were, because that would involve contradicting yourself. Sometimes you can get rid of shame by redoing things the right way, though this is impossible for many social situations. So, you err, you&amp;rsquo;re ashamed, you learn, and you&amp;rsquo;re still ashamed.&lt;/p>
&lt;p>This said, sometimes you shouldn&amp;rsquo;t learn: you shouldn&amp;rsquo;t necessarily let shame influence your behaviour. There are many wicked social norms and beauty ideals causing shame, and these must be ignored. For example, I sometimes feel ashamed for asking &amp;rsquo;too many&amp;rsquo; questions during lectures, even when they&amp;rsquo;re well motivated: women shouldn&amp;rsquo;t take up too much space, and especially not in the context of mathematics. Another example: I sometimes have to remind myself I&amp;rsquo;m free to not shave my legs. A complicating aspect of shame, then, is that you can be ashamed although you&amp;rsquo;re right and they wrong.&lt;/p>
&lt;h3 id="the-intensity-of-shame">The intensity of shame &lt;a href="#the-intensity-of-shame" class="hash">#&lt;/a>&lt;/h3>
&lt;p>The worst thing about shame, however, is its intensity. In my experience, no other negative emotion can be as violent. Even moderate amounts of shame can be unbearable, making you want to vanish from the surface of the earth for a while. The force of emotion can be debilitating, preventing you from learning whatever lessons are to be learned.&lt;/p>
&lt;p>And whereas intense anger or sadness can be pleasurable, a conundrum first discussed in &lt;em>Poetics&lt;/em>, shame just feels terrible. There are no tragedies, sad movies or sub-genres of emo music meant to elicit shame &amp;ndash; shame doesn&amp;rsquo;t lead to catharsis; shame isn&amp;rsquo;t pure emotion. Instead, shame responses are complicated and highly culturally dependent.&lt;/p>
&lt;h3 id="societies-of-shame">Societies of shame &lt;a href="#societies-of-shame" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Shame is a powerful driver of human behaviour, and it has been exploited to enforce social institutions since forever. For instance, shame compels people to be faithful to their spouses and respect social hierarchies, and most successful world religions have incorporated an element of shame to ensure their growth. Societies are largely built on shame, especially those where there&amp;rsquo;s a strong culture of honour.&lt;/p>
&lt;p>So shame serves an important role, but only on a social level. Most other negative emotions can benefit the individual; shame only benefits the individual insofar as it increases their social status.&lt;/p>
&lt;p>Humans are sometimes willing to inspire anger, fear, envy &amp;ndash; any negative emotion really, except for shame. As noted by Hjalmar Söderberg in &lt;em>Doctor Glas&lt;/em>, we&amp;rsquo;re desperate for human contact:&lt;/p>
&lt;blockquote>
&lt;p>One wants to be loved; failing that, admired; failing that, feared; failing that, hated and despised. One wants to instill some sort of feeling in people. The soul shudders at the void and wants contact at any price.&lt;/p>
&lt;/blockquote>
&lt;p>And shame is the opposite of human contact.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>I&amp;rsquo;m just re-expressing the opening sentence of &lt;em>Anna Karenina&lt;/em> in a roundabout way: &amp;lsquo;All happy families are alike; each unhappy family is unhappy in its own way.&amp;rsquo;&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>Compared with the 2006 data, all numbers have risen, on average by 7 percentage points, though the ranking of negative feelings has remained pretty stable.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/on-shame/</guid><pubDate>Sun, 30 Nov 2025 00:00:00 +0000</pubDate></item><item><title>Normal situations, abnormal norms</title><link>https://isabeldahlgren.github.io/normal-situations-abnormal-norms/</link><description>&lt;p>Most people lead a comfortable 9-5 life. You go to work, work away, go home, sometimes seeing friends or engaging in extracurriculars. Even so, sometimes you find yourself in unusual social situations, situations where a first-timer wouldn&amp;rsquo;t know how to behave. Here are some normal situations with abnormal social norms.&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Flights:&lt;/strong> On a 12-hour flight, you find a cross-section of the population bored, tired and without Internet. In this willpower-reduced state, snobs dress in sweatpants and workaholics watch trashy feel-good movies. A surprising number of people try napping&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>, while others strike up conversations with strangers.&lt;/li>
&lt;li>&lt;strong>Dates:&lt;/strong> It&amp;rsquo;s common for people to take steps to meet new people, for example by attending local meet-ups; however, friend-seekers rarely try to get to know others well. Dates are among the few situations where people deliberately try building deep connections. Think about it: the purpose of a date is for two people, usually strangers, to become romantic partners. Pretty remarkable, right? Even more remarkable is that it sometimes works.&lt;/li>
&lt;li>&lt;strong>Alcohol:&lt;/strong> Alcohol creates a form of social anarchy. Moderately tipsy people are amusing &amp;ndash; they&amp;rsquo;re a bit like children, candid and ignorant of social norms, though they enjoy discussing grown-up topics.&lt;/li>
&lt;li>&lt;strong>Job interviews:&lt;/strong> A job interview is nothing at all like an ordinary conversation, despite appearances; it&amp;rsquo;s like an improv play where the interviewee tries selling themself and the interviewer maintains a poker face. The interviewee presents themself as the ideal employee: competent, reliable and a good coffee machine conversationalist&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>.&lt;/li>
&lt;li>&lt;strong>Music festivals:&lt;/strong> During music festivals, thousands of sweaty people packed like sardines scream in unison while jumping up and down. Though they&amp;rsquo;re strangers, they share a sense of camaraderie, sometimes even sharing food and belongings&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>. Cult?&lt;/li>
&lt;li>&lt;strong>Summer camps:&lt;/strong> There are summer camps for kids as well as adults, though the latter are sometimes called &amp;lsquo;retreats&amp;rsquo; or &amp;lsquo;get-aways&amp;rsquo;. The goal of most summer camps is for a group of strangers to pursue a common interest together and make new friends. Being in a remote place helps create a space for intimacy: the camp participants tacitly agree that &amp;lsquo;what gets said here stays here&amp;rsquo;.&lt;/li>
&lt;li>&lt;strong>Military service:&lt;/strong> Life in the military is famously totalitarian: it&amp;rsquo;s all about obeying your superiors. And if you do mandatory military service, you&amp;rsquo;re normally at the bottom of the pecking order. In no other setting is there such a clear hierarchy, and in no other setting is outright bullying socially acceptable.&lt;/li>
&lt;/ul>
&lt;p>These are all time-bound activities disconnected from everyday life, usually physically or by means of clothing, with extraordinary social norms. This makes them good settings for people-watching. Why study &amp;lsquo;man in a state of nature&amp;rsquo; &amp;ndash; an impossibility anyway &amp;ndash; if you can study &amp;lsquo;man in a state of society&amp;rsquo;?&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>Perhaps this is telling of our society.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>Any weaknesses of the interviewee are, naturally, just their virtues in excess: &amp;lsquo;I&amp;rsquo;m somewhat stubborn, but that&amp;rsquo;s just because I&amp;rsquo;m so driven.&amp;rsquo;&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>Every music festival has a bit of Woodstock to it.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/normal-situations-abnormal-norms/</guid><pubDate>Sun, 23 Nov 2025 00:00:00 +0000</pubDate></item><item><title>Techno-optimism - techno-altruism?</title><link>https://isabeldahlgren.github.io/techno-optimism-techno-altruism/</link><description>&lt;p>You might think the average worker in the tech industry is a diehard capitalist, since tech is full of rich entrepreneurs. Moreover, many big tech companies appear to be driven more by money-making than by providing consumer value: Facebook didn&amp;rsquo;t connect us with friends and family and OpenAI didn&amp;rsquo;t remain a safety-oriented non-profit.&lt;/p>
&lt;p>However, most tech hubs are overwhelmingly liberal: 64% of voters in San Francisco are registered as Democrats; 8% as Republicans. Many programmers also dream of creating widely accessible software, software for anyone, rich or poor, with a stable Internet connection.&lt;/p>
&lt;p>Sometimes I think that the grandfathers of communism would be tech bros had they lived today. Technology has allowed for the scaling of selfless, altruistically motivated side projects. Here are three notable examples.&lt;/p>
&lt;h3 id="open-source-software">Open source software &lt;a href="#open-source-software" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Open source software, code freely available for anyone to use or modify, is a remarkable phenomenon. Although researchers had routinely shared code since the early days of computing in the 1950s, open source only became a thing in the 1990s. Today open source software is used in many critical technologies; for example, Python, Git and OpenSSH (the most popular SSH implementation) are all open source.&lt;/p>
&lt;p>One 2024 Harvard study estimated the overall value of open source to be $8.8 trillion, about the GDP of France, Germany and the Netherlands combined. The study also estimated that firms would have to spend 3.5 times as much time on software development if open source hadn&amp;rsquo;t existed. Though you should expect huge error bars on these numbers, it&amp;rsquo;s clear that open source generates a lot of value.&lt;/p>
&lt;p>We&amp;rsquo;re lucky things panned out this way &amp;ndash; the success of open source doesn&amp;rsquo;t seem inevitable. Considering the value they generate, open source developers receive way too little credit. They&amp;rsquo;ll have highly starred repos on GitHub, which is kind of a flex, but that&amp;rsquo;s about it; meanwhile, their peers earn six-figure salaries. Also, remember that iOS and Windows are closed source, as are most of the cutting-edge AI models&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. So, one could certainly imagine all the buy-me-coffee buttons being replaced by paywalls.&lt;/p>
&lt;h3 id="opencourseware">OpenCourseWare &lt;a href="#opencourseware" class="hash">#&lt;/a>&lt;/h3>
&lt;p>In 1999, the University of Tübingen in Germany published a series of lectures freely available online, marking the beginning of the OpenCourseWare (OCW) movement. Three years later, MIT launched its OCW, the most ambitious to date, now offering 2,500 courses with lecture notes, problem sets and exams. Other American elite universities soon followed, publishing learning materials from their most popular courses online. There are also joint OCW initiatives, like edX, allowing anyone to audit courses on topics ranging from corporate finance to aerospace engineering.&lt;/p>
&lt;p>And so we find ourselves living in a world where the average person can get to the level of a first-year undergrad in any subject without leaving their bedroom&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. Given the state of human knowledge in 2025, someone who has done the first year of a bachelor&amp;rsquo;s in three to five subjects would know more than most Renaissance men did.&lt;/p>
&lt;h3 id="wikipedia">Wikipedia &lt;a href="#wikipedia" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Wikipedia was launched in January 2001 by Internet enthusiasts Jimmy Wales and Larry Sanger. There had been previous attempts at creating free online encyclopaedias, starting in 1993 with Rick Gates&amp;rsquo; initiative to create Interpedia, or The Internet Encyclopedia. In March 2000, Wales and Sanger founded Nupedia, an encyclopaedia written by volunteers with subject-matter expertise. Nupedia didn&amp;rsquo;t take off, largely because of its extensive peer-review process, and the idea behind Wikipedia was for people to submit entries that would then go through Nupedia&amp;rsquo;s peer review. But Wikipedia exploded: while Nupedia had received only 12 articles by the time Wikipedia launched, Wikipedia had 18,000 articles in January 2002.&lt;/p>
&lt;p>Today Wikipedia is absolutely huge. As of November 2025, the English Wikipedia has around 7 million articles. By contrast, Britannica has fewer than 150,000 articles&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>. Also, considering that anyone can edit Wikipedia, the articles are surprisingly high-quality: a 2005 study found that the average science article in Wikipedia contained four inaccuracies; Britannica, around three.&lt;/p>
&lt;p>It&amp;rsquo;s worth meditating on this for a moment. Who could have predicted the success of Wikipedia? Imagine the amount of expert human labour that goes into each article &amp;ndash; labour which, just as for open source development, goes unrecognised. Today you can learn anything about everything by googling, and this is largely thanks to Wikipedia.&lt;/p>
&lt;h3 id="final-words">Final words &lt;a href="#final-words" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Open source, OpenCourseWare and Wikipedia all rely on the Internet and, in turn, on brogrammers happy to spend their weekends doing unpaid work&lt;sup id="fnref:4">&lt;a href="#fn:4" class="footnote-ref" role="doc-noteref">4&lt;/a>&lt;/sup>. These projects fundamentally depend on people&amp;rsquo;s goodwill.&lt;/p>
&lt;p>It&amp;rsquo;s wonderful seeing open source facilitating collaborative development and OCW and Wikipedia enabling lifelong learning. However, what I find most fascinating about these projects is that they demonstrate that pure altruism is possible at a large scale. Considering this, I can imagine Marx smiling in his grave.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>More generally, considering that the drug discovery process seems so closed, you might marvel at the mere existence of open source.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>In-person learning is far more effective than online learning, mostly because of the social context, so I wouldn&amp;rsquo;t go so far as saying that you could learn the content of a full bachelor&amp;rsquo;s remotely.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>On Britannica&amp;rsquo;s website, it says that members gain access to &amp;ldquo;over 130,000&amp;rdquo; articles, so I assume the number of articles is below 150,000.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:4">
&lt;p>Perhaps the most striking feature of these projects is that they all began as side-projects. See how productive slack can be?&amp;#160;&lt;a href="#fnref:4" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/techno-optimism-techno-altruism/</guid><pubDate>Sun, 16 Nov 2025 00:00:00 +0000</pubDate></item><item><title>What effective altruism misses</title><link>https://isabeldahlgren.github.io/what-effective-altruism-misses/</link><description>&lt;p>Effective altruism (EA) centres around a dozen or so &amp;ldquo;cause areas&amp;rdquo;, areas in which you can have a big positive impact. Common cause areas include AI safety, global poverty, farm animal welfare, biorisk, nuclear security and EA field building. For the past couple of years, AI safety has been very hot within EA, with some of the EA golden boys, like Will MacAskill and Holden Karnofsky, making AI going well their top priority.&lt;/p>
&lt;p>Say that EAs solve the problems associated with the above cause areas. Humanity has smoothly transitioned to a post-AGI world, eliminated global poverty, put an end to factory farming and reduced existential risk to zero. While this sounds like a rosy vision of the future, unless you live in extreme poverty, your life in this hypothetical world will be much the same as it is now. Those who regularly worry about all the world&amp;rsquo;s problems will find something else to worry about. And if you are dissatisfied with life &amp;ndash; which, if you&amp;rsquo;re anything like the average American, isn&amp;rsquo;t entirely unlikely&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup> &amp;ndash; you&amp;rsquo;ll likely remain dissatisfied.&lt;/p>
&lt;p>EA, then, appears to miss something very fundamental about what it means to be human&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. If EAs want to maximise positive impact, they should make wellbeing a core cause area.&lt;/p>
&lt;h3 id="what-i-mean">What I mean &lt;a href="#what-i-mean" class="hash">#&lt;/a>&lt;/h3>
&lt;p>The cause area of wellbeing, as I see it, could be concerned with the following question: How can we build better societies, where the average citizen has a higher baseline level of wellbeing? Here, I use the term &amp;ldquo;wellbeing&amp;rdquo; to mean &amp;ldquo;mental health&amp;rdquo; or &amp;ldquo;welfare&amp;rdquo;.&lt;/p>
&lt;p>Imagine a society organised around the principle of improving wellbeing in the population at large, rather than around some ideology. While I don&amp;rsquo;t know exactly what such a society would look like, current societies leave a lot to be desired.&lt;/p>
&lt;p>Some ideas that come to mind: Can we create societies where everyone feels a sense of community? Societies where people report high levels of life satisfaction well into their 80s? Societies where teenagers take care of one another?&lt;/p>
&lt;p>Designing such societies is a long-term goal. In the near term, we must fix broken healthcare systems, e.g. by providing widespread access to therapy. In America, about &lt;a href="https://www.nimh.nih.gov/health/statistics/mental-illness">50%&lt;/a> of adults with mental illness receive no treatment&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>.&lt;/p>
&lt;h3 id="why-eas-should-care">Why EAs should care &lt;a href="#why-eas-should-care" class="hash">#&lt;/a>&lt;/h3>
&lt;p>EAs have good grounds to make wellbeing a core cause area. An &lt;a href="https://80000hours.org/career-guide/most-pressing-problems/">SNT analysis&lt;/a> (scale, neglectedness, tractability) reveals big potential for positive impact.&lt;/p>
&lt;p>To begin with, the &amp;ldquo;wellbeing problem&amp;rdquo; is huge in scope. According to &lt;a href="https://www.thelancet.com/journals/eclinm/article/PIIS2589-5370(22)00405-9/fulltext">one estimate&lt;/a>, mental health disorders accounted for about 16% of global DALYs&lt;sup id="fnref:4">&lt;a href="#fn:4" class="footnote-ref" role="doc-noteref">4&lt;/a>&lt;/sup> in 2019. If everyone had a decent baseline level of mental health, we&amp;rsquo;d see many positive downstream effects. EAs have, of course, already &lt;a href="https://www.lesswrong.com/posts/dyouimJ5AAg5mSpQ7/rethink-wellbeing-s-year-2-update-foster-sustainable-high">estimated the benefits of merely treating EAs suffering from depression&lt;/a>, which makes them more effective in their altruistic endeavours. Moreover, as material living standards increase, mental health problems might become the biggest obstacle to humanity&amp;rsquo;s flourishing. Who cares if we can colonise other galaxies if &lt;a href="https://www.pewresearch.org/social-trends/2025/01/16/men-women-and-social-connections/#:~:text=About%20one-in-six%20Americans%20(16%25)%20say%20they%20feel,of%20the%20time%20%E2%80%93%20including%20roughly%20equal">one in six men&lt;/a> reports being lonely?&lt;/p>
&lt;p>The topic of mental health also seems neglected in EA circles. The classic cause areas, along with AI safety, dominate the EA discourse. To my knowledge, there are only two wellbeing-oriented EA organisations: &lt;a href="https://www.rethinkwellbeing.org/">Rethink Wellbeing&lt;/a>, which offers therapy to altruists, and the &lt;a href="https://www.happierlivesinstitute.org/">Happier Lives Institute&lt;/a>, which finds cost-effective charities to improve wellbeing. However, I&amp;rsquo;d love to see EA think tanks focused on designing happier societies &amp;ndash; a huge social engineering task, where an EA mindset of truth-seeking and scope-sensitivity would be useful.&lt;/p>
&lt;p>Finally, improving wellbeing appears to be a relatively tractable problem. Sound decision-making in healthcare can have seismic effects, and now, with a growing body of research on wellbeing, policy-makers can make better-informed decisions. Effective policy advocacy might be hard, but it shouldn&amp;rsquo;t be any harder in healthcare than in AI&lt;sup id="fnref:5">&lt;a href="#fn:5" class="footnote-ref" role="doc-noteref">5&lt;/a>&lt;/sup>.&lt;/p>
&lt;h3 id="towards-a-better-future">Towards a better future &lt;a href="#towards-a-better-future" class="hash">#&lt;/a>&lt;/h3>
&lt;p>If EAs want to maximise happiness, they should cut to the chase, making wellbeing a central cause area. Once those who need help receive it, think about how to design better societies. Don&amp;rsquo;t aim for utopia &amp;ndash; for a start, make teenagers happy again, eliminate male loneliness, and restore the elderly&amp;rsquo;s sense of dignity. While everyone experiences &amp;ndash; and probably should experience &amp;ndash; some degree of existential angst, a society doesn&amp;rsquo;t need to have depressed incels, burned-out careerists and disoriented &lt;a href="https://en.wikipedia.org/wiki/NEET">NEETs&lt;/a>.&lt;/p>
&lt;p>I agree with the sentiment, expressed by Will MacAskill in his article on &lt;a href="https://forum.effectivealtruism.org/posts/R8AAG4QBZi5puvogR/effective-altruism-in-the-age-of-agi">EA in the age of AGI&lt;/a>, that EA is becoming somewhat outdated. For EA to stay relevant, the movement needs to reprioritise cause areas. However, rather than morphing into a subfield of AI safety, EAs should ask what would fundamentally create a better world.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>&lt;a href="https://news.gallup.com/poll/610133/less-half-americans-satisfied-own-lives.aspx">One in five Americans&lt;/a> say they&amp;rsquo;re dissatisfied with life.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>Maybe the best evidence for this claim is that EA only attracts certain kinds of people.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>In practice, therapy is restricted to those with strong negotiation skills, who can convince healthcare workers they need priority, and CEOs, who receive therapy in the form of CEO coaching.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:4">
&lt;p>DALYs, disability-adjusted life years, roughly measure the overall negative effects of a disease.&amp;#160;&lt;a href="#fnref:4" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:5">
&lt;p>I have limited experience with policymaking though, so take the point about tractability with a grain of salt.&amp;#160;&lt;a href="#fnref:5" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/what-effective-altruism-misses/</guid><pubDate>Sun, 09 Nov 2025 00:00:00 +0000</pubDate></item><item><title>The Sanremese</title><link>https://isabeldahlgren.github.io/the-sanremese/</link><description>&lt;p>In Europe, you don&amp;rsquo;t have to travel far to experience a completely different culture. If you randomly select a point in Europe outside of Russia, you&amp;rsquo;re never further than &lt;a href="https://maps.app.goo.gl/oY6zXbdFeQCc6Xqo7">10h by train from the nearest national border&lt;/a>. There are 44 countries and 24 official languages in Europe &amp;ndash; yet, the area of Europe is just 60% of the area of the American South.&lt;/p>
&lt;p>I expected Sanremo to be like some of the towns by the French Riviera I&amp;rsquo;d visited when living in Aix, only with better ravioli and friendlier people. Although I wasn&amp;rsquo;t entirely wrong, I was far from correct.&lt;/p>
&lt;p>Founded in Roman times, Sanremo became a resort destination in the mid-1800s, hosting European nobility from Russia and Prussia. These days, tourism remains the primary source of economic activity, and, just like the average tourist destination by the Mediterranean, its city centre is dominated by restaurants and clothing boutiques.&lt;/p>
&lt;p>However, Sanremo has a few peculiarities distinguishing it from some of the similarly sized towns on the other side of the French-Italian border. For example, much of the city smells of flowers, earning it the nickname &amp;ldquo;La Città dei Fiori&amp;rdquo; (&amp;ldquo;The City of Flowers&amp;rdquo;). But the biggest differences come from observing people on the streets, the Sanremese.&lt;/p>
&lt;p>The first thing you&amp;rsquo;ll notice is that the posh Sanremese wear Gucci rather than Louis Vuitton. The ladies don&amp;rsquo;t try to be chic, as in some of the coastal cities in France invaded by Parisians; instead, they dress in Italian designer clothes. Some of them seem to dress to maximise the number of logos visible to passersby. Among men, beige suits aren&amp;rsquo;t as popular as in Nice. Men in Sanremo make more of an effort to be macho, often wearing tracksuits or tight-fitting Armani T-shirts.&lt;/p>
&lt;p>However, the rich in Sanremo don&amp;rsquo;t own the city, as they do in Monaco or Nice. Most people on the streets are perfectly normal Italians. In fact, I met some stereotypically Italian Italians: grannies 120 cm tall, grandpas with flat caps and pipes, and men with giant calves on Bianchis.&lt;/p>
&lt;p>Not only are the tourists in Sanremo fewer in number, but they are also of a different kind. On the French Riviera, &lt;a href="https://www.nice-premium.com/nice-records-a-strong-increase-in-tourism-in-2024/">most tourists come from Britain, Germany or America&lt;/a>. During my three-day stay, I heard almost no English or German. I heard a lot of French though &amp;ndash; this is hardly surprising, given that everything is 20% cheaper on the Italian side of the border.&lt;/p>
&lt;p>In Nice and Sanremo, the proportion of foreign residents is somewhere between 10 and 20%. While foreign residents in Nice mostly &lt;a href="https://nicepresse.com/nice-un-habitant-sur-trois-est-un-immigre/">come from the former French colonies&lt;/a>, Sanremo has &lt;a href="https://www.tuttitalia.it/liguria/39-sanremo/statistiche/cittadini-stranieri-2024/">a large population of Eastern Europeans&lt;/a>. In the late 1800s, there was a particularly strong Russian community in Sanremo &amp;ndash; one of the city&amp;rsquo;s most famous buildings is the Russian Orthodox cathedral from 1912.&lt;/p>
&lt;p>Despite having a population of just 55,000, Sanremo is surprisingly diverse. It&amp;rsquo;s neither a dead rural village nor a tourist enclave like some of the French towns 50 km west. There&amp;rsquo;s a good balance between locals and non-locals, between normal and abnormal. Sanremo feels like one of those resort towns from the 1900s that&amp;rsquo;s actually wholesome.&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/the-sanremese/</guid><pubDate>Sun, 02 Nov 2025 00:00:00 +0000</pubDate></item><item><title>On cities</title><link>https://isabeldahlgren.github.io/on-cities/</link><description>&lt;p>The 17th century was a period of rapid human progress. There was the Scientific Revolution. Philosophers like John Locke and Thomas Hobbes laid the groundwork for modern political thought, while René Descartes and Blaise Pascal made important contributions to philosophy more broadly. Meanwhile, in the arts, William Shakespeare redefined English literature, while Rembrandt changed the way artists approach their subjects.&lt;/p>
&lt;p>The people mentioned above have something important in common besides being extremely intelligent white men. They spent non-negligible parts of their lives in cities. In the 1600s, only &lt;a href="https://ourworldindata.org/grapher/long-term-urban-population-region?time=1500..latest&amp;amp;country=Eastern+Africa~Western+Europe~Korea~Central+Europe~OWID_EUR">8%&lt;/a> of the European population lived in cities. Many people spent their entire lives without setting foot in a big city. But there is a sense in which you need to spend some time in a city to become an influential historical figure &amp;ndash; or, at the very least, to understand human societies. Cities tell thousands of tales about history, culture and economics.&lt;/p>
&lt;h3 id="in-history">In history &lt;a href="#in-history" class="hash">#&lt;/a>&lt;/h3>
&lt;p>To begin with, cities serve as historical documents. A city&amp;rsquo;s architecture provides insights into the lives of its inhabitants, both of the past and present. Consider, for example, the massive apartment complexes of East Berlin or the colourful houses perched on the hills in Valparaíso.&lt;/p>
&lt;p>Monuments tell us about the values, artistic ideals and technologies at a given time in history. The Blue Mosque of Istanbul marks the historical influence of Islam, while the Dohány Street Synagogue in Budapest marks the influence of the Jewish community in Budapest. The Burj Khalifa is a remarkable engineering achievement, as well as a symbol of wealth and vanity.&lt;/p>
&lt;p>A city might also contain more explicit historical references, such as memorials. Memorials, being explicit expressions of regret, grief and triumph, reveal the citizens&amp;rsquo; relation to history. The Monument to the Discoveries in Lisbon, built as late as 1960, is a glorification of the country&amp;rsquo;s colonial past.&lt;/p>
&lt;h3 id="in-culture">In culture &lt;a href="#in-culture" class="hash">#&lt;/a>&lt;/h3>
&lt;p>But cities aren&amp;rsquo;t just of interest to historians. Movies, books and plays need to take place somewhere where there&amp;rsquo;s life, and nowhere is there more life than in a global metropolis. Rural areas are sometimes referred to as &amp;ldquo;dead&amp;rdquo;, while New York never sleeps. In fact, many pieces of narrative art appear to be about the city itself, the plot being a mere formality. Some novels are riddled with location references to the point of becoming difficult to follow for non-locals; consider &lt;em>Ulysses&lt;/em>, for instance.&lt;/p>
&lt;p>Cities often referenced in popular culture become cities of our imagination, making them tourist magnets. Paris owes a large part of its tourism to the innumerable cultural references to the French capital. Indeed, &amp;ldquo;American expats loafing in Paris&amp;rdquo; could almost be considered its own literary genre.&lt;/p>
&lt;p>Like the characters they create, artists are products of cities. This has given rise to another kind of cultural tourism, whereby visitors seek out key locations in an artist&amp;rsquo;s life. Prague doesn&amp;rsquo;t just brand itself in terms of cheap beer, but also as the city of Franz Kafka.&lt;/p>
&lt;h3 id="in-economics">In economics &lt;a href="#in-economics" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Some cities aren&amp;rsquo;t defined by their historical and cultural heritage, but rather by economic factors. Such cities shed light on how the global economy works, as well as on the human desire to make it big. For this reason, many accounts of economic history involve lengthy descriptions of cities.&lt;/p>
&lt;p>To analyse cities from an economic viewpoint, Max Weber introduced the distinction between &amp;ldquo;producer cities&amp;rdquo; and &amp;ldquo;consumer cities&amp;rdquo; in his 1921 essay &lt;em>The City&lt;/em>. Producer cities are organised around some specific trade, while consumer cities are more like administrative centres whose residents have access to certain streams of income. Think Detroit versus Washington DC or, for a European example, Antwerp versus Brussels. An extreme example of a producer city is Kiruna, a city in the north of Sweden, built in the 1890s to serve the Kiruna Mine.&lt;/p>
&lt;p>Certain cities are defined by the human capital they attract, the quintessential example being San Francisco. Startup founders make pilgrimages to Silicon Valley, since that&amp;rsquo;s where everything happens. For many San Franciscans, The City by the Bay is more of a transit point than a proper home. University towns are similar, in that a large part of their populations aren&amp;rsquo;t permanent residents.&lt;/p>
&lt;h3 id="understanding-people-through-cities">Understanding people through cities &lt;a href="#understanding-people-through-cities" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Much of human innovation, whether in culture, economics or the arts, takes place in cities. Wonders happen when smart people with diverse backgrounds form tightly knit communities, as in, say, Bloomsbury in the 1910s. Such communities are more likely to arise in cities. However, this isn&amp;rsquo;t the sole reason cities are important drivers of human progress. To drive real human progress, you need to understand humans and the societies they build. If you want to understand people &amp;ndash; their fears, desires and rituals &amp;ndash; understand cities.&lt;/p>
&lt;p>Cities are storytellers. They tell you thousands of stories in a most engaging way, all at once. Even a moderately sized city can tell you everything you&amp;rsquo;ve ever wanted to know. You just have to listen.&lt;/p>
&lt;p>&lt;em>Thanks to Raphaëlle Cohen, Emil Wasteson Wallén and Miles Kodama for feedback on this text.&lt;/em>&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/on-cities/</guid><pubDate>Sun, 26 Oct 2025 00:00:00 +0000</pubDate></item><item><title>The mad scientist is probably mad</title><link>https://isabeldahlgren.github.io/the-mad-scientist-is-probably-mad/</link><description>&lt;p>Many pioneering scientists were initially rejected by the scientific community. At first, the ideas of Galileo Galilei, Gregor Mendel and Geoffrey Hinton were ridiculed. There are many more examples of such &amp;ldquo;martyrs of science&amp;rdquo;, people who were cancelled and then vindicated.&lt;/p>
&lt;p>Science isn&amp;rsquo;t as dramatic nowadays. There aren&amp;rsquo;t as many Galilei-like figures, at least not in relative terms. If 1/1,000 of crazy-seeming scientists turned out to be geniuses in the 20th century, today the proportion might be more like 1/1,000,000. Why is that?&lt;/p>
&lt;p>To begin with, it seems as if &lt;a href="https://web.stanford.edu/~chadj/IdeaPF.pdf">good ideas are getting harder to find&lt;/a>. Some past discoveries appear almost inevitable in hindsight, given the amount of readily accessible data. In contrast, much science today requires costly equipment and large-scale collaboration. Scott Alexander put it well in his essay about &lt;a href="https://www.lesswrong.com/posts/v7c47vjta3mavY3QC/is-science-slowing-down">science slowing down&lt;/a>:&lt;/p>
&lt;blockquote>
&lt;p>For example, element 117 was discovered by an international collaboration who got an unstable isotope of berkelium from the single accelerator in Tennessee capable of synthesizing it, shipped it to a nuclear reactor in Russia where it was attached to a titanium film, brought it to a particle accelerator in a different Russian city where it was bombarded with a custom-made exotic isotope of calcium, sent the resulting data to a global team of theorists, and eventually found a signature indicating that element 117 had existed for a few milliseconds. Meanwhile, the first modern element discovery, that of phosphorous in the 1670s, came from a guy looking at his own piss.&lt;/p>
&lt;/blockquote>
&lt;p>Early scientists picked the low-hanging fruit.&lt;/p>
&lt;p>What&amp;rsquo;s more, the average scientist today is more competent than the average scientist 100 years ago, since job selection has become more meritocratic. In the 18th and 19th centuries, science was largely run by aristocrats with too much spare time. These days, becoming a leading researcher in a STEM field requires brains. So if I&amp;rsquo;m disagreeing with a great number of brilliant people, it&amp;rsquo;s more likely that I&amp;rsquo;m the one who&amp;rsquo;s wrong.&lt;/p>
&lt;p>Finally, the industrial organisation of science might create a selection pressure against wild ideas. While most wild ideas are bad, brilliant ideas often seem wild too. Across almost all scientific disciplines, &lt;a href="https://www.researchgate.net/figure/A-Average-number-of-co-authors-per-paper-published-by-individual-scientists-during-the_fig1_297723759">researchers publish more papers now&lt;/a> than a hundred years ago. Someone optimising purely for research output probably can&amp;rsquo;t afford to explore as many wild ideas. If you&amp;rsquo;re expected to publish 4-5 papers per year, you&amp;rsquo;ll spend a significant amount of time on overhead work, leaving you with less time for open-ended exploration. In terms of Kuhn&amp;rsquo;s framework for scientific revolutions&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>, you&amp;rsquo;ll mostly be doing normal science.&lt;/p>
&lt;p>Science isn&amp;rsquo;t what it used to be. The fraction of martyrs of science today is minuscule. This doesn&amp;rsquo;t mean today&amp;rsquo;s scientists are any less intelligent&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. Rather, it means that today&amp;rsquo;s scientific methodology works relatively well, and that brilliant ideas are the norm rather than the exception. If we&amp;rsquo;re right about the mad scientist just being mad, that&amp;rsquo;s a good thing.&lt;/p>
&lt;p>&lt;em>This post was inspired by a conversation with Miles Kodama.&lt;/em>&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>According to Thomas Kuhn, science goes something like this: Most of the time, scientists do normal science (experiment, analyse, repeat). But with time, they start noticing anomalies, results which cannot be explained by existing theories. Soon, there is a scientific crisis, leading to a scientific revolution. After the revolution, you&amp;rsquo;re back to normal science again. The people calling out their colleagues for being wrong should be prepared to be cancelled, at least for some time.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>According to Claude, the number of living physicists outstrips the number of physicists who lived in the past, from ancient times until the present day. So there are likely many physicists of the same calibre as Einstein, Schrödinger and Bohr alive today.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/the-mad-scientist-is-probably-mad/</guid><pubDate>Sun, 19 Oct 2025 00:00:00 +0000</pubDate></item><item><title>High school for heroes</title><link>https://isabeldahlgren.github.io/high-school-for-heroes/</link><description>&lt;p>High school isn&amp;rsquo;t so much about preparing teenagers for adult life. In Sweden, you have a subject about cooking and personal finance, but only until secondary school. I think this makes sense: you only need one course on practical adult skills. After you know the basics, it&amp;rsquo;s all about practising IRL.&lt;/p>
&lt;p>But being a competent grown-up involves many other skills. It&amp;rsquo;s not just about successfully running a household &amp;ndash; it&amp;rsquo;s also a matter of &amp;ldquo;running yourself&amp;rdquo;. There are plenty of &amp;ldquo;serious&amp;rdquo; subjects we could fit in a standard high school curriculum. Here&amp;rsquo;s a vibe-based list:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Learning&lt;/strong>: Please please please, teach high schoolers about spaced repetition, interleaving and testing. The ability to learn things quickly is useful in any domain of life. In the future, this skill will probably become even more useful, as AI speeds up social and technological progress.&lt;/li>
&lt;li>&lt;strong>Rationality&lt;/strong>: If you want a serious-sounding name for this subject, call it behavioural economics or applied philosophy. Basic knowledge about cognitive biases and System 1 and System 2 isn&amp;rsquo;t just intellectual amusement &amp;ndash; it&amp;rsquo;s also very practical. If everyone learned about confirmation bias at an early age, would there be as much polarisation in the world?&lt;/li>
&lt;li>&lt;strong>Curiosity&lt;/strong>: We could also make high-schoolers read non-fiction or watch videos on popular science. Make them read or watch videos about the French Revolution, the opioid crisis in the US, personal effectiveness, the Holocaust, the nature of time, the Bauhaus school of design, Harry Houdini&amp;hellip; They could also organise seminars for each other, as in &lt;a href="https://fractalnyc.com/">Fractal&lt;/a>. Forcing students to memorise facts won&amp;rsquo;t produce life-long learners.&lt;/li>
&lt;li>&lt;strong>Statistics&lt;/strong>: Most people are forced to take a stats class at university, and for good reason. Statistics is one of the most useful academic disciplines, and it forms the backbone of all science. Maybe we could teach high schoolers about Bayesian inference, hypothesis testing and confidence intervals. This material can be made accessible to high schoolers: you can explain &lt;a href="https://www.youtube.com/watch?v=HZGCoVF3YvM">Bayes&amp;rsquo; rule&lt;/a> by means of rectangles and teach students simple algorithms for hypothesis testing and computing confidence intervals.&lt;/li>
&lt;li>&lt;strong>General AI&lt;/strong>: I almost feel obligated to add something directly AI-related. I think students should be taught the basics of machine learning: how gradient descent works, the general transformer architecture, etc. This is much like how students are taught about evolution and the different parts of the cell. I think this would empower people to reason more clearly about transformative AI&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>.&lt;/li>
&lt;li>&lt;strong>Wellbeing&lt;/strong>: You might also offer a course on wellbeing, incorporating elements of philosophy and the science of happiness.&lt;/li>
&lt;li>&lt;strong>Agency&lt;/strong>: We could also teach a class on agency, by which I mean the art of &amp;ldquo;doing stuff&amp;rdquo;. Most high-schoolers have never experienced the joy of agency &amp;ndash; the joy of poking at the world and noticing something happen. One could e.g. have a project-based class where high-schoolers get to found their own companies or orgs&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>.&lt;/li>
&lt;/ul>
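To make the statistics suggestion above concrete, here is a minimal sketch of Bayes&amp;rsquo; rule done the &amp;ldquo;rectangles&amp;rdquo; way &amp;ndash; counting people in boxes instead of manipulating conditional probabilities. The disease-testing numbers are made up purely for illustration:

```python
# Bayes' rule via "rectangles": count people in each box instead of
# juggling conditional probabilities. All numbers are illustrative.
population = 10_000
prevalence = 0.01        # P(disease)
sensitivity = 0.9        # P(positive test | disease)
false_positive = 0.05    # P(positive test | no disease)

sick = population * prevalence          # 100 people in the "sick" box
healthy = population - sick             # 9,900 in the "healthy" box

true_positives = sick * sensitivity     # 90 sick people who test positive
false_positives = healthy * false_positive  # 495 healthy people who test positive

# P(disease | positive) = true positives / all positives
posterior = true_positives / (true_positives + false_positives)
print(round(posterior, 3))  # → 0.154
```

The punchline a high-schooler can see immediately: even with a 90%-sensitive test, most positives come from the much bigger "healthy" rectangle.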
&lt;p>I&amp;rsquo;m most bullish about the benefits of classes on learning and rationality; I&amp;rsquo;m less certain about the benefits of the remaining subjects. So, here&amp;rsquo;s one suggestion for a high-school curriculum: take a standard high-school curriculum, make learning and rationality mandatory, and offer the remaining subjects as electives. Overall, it seems like this kind of curriculum would produce more well-rounded, curious doers &amp;ndash; the kind of people who might solve the world&amp;rsquo;s most pressing problems.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>You might also consider adding a module on prompt engineering.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>Some Swedish high-schools offer such courses in entrepreneurship; Google &amp;ldquo;Ung företagsamhet&amp;rdquo;.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/high-school-for-heroes/</guid><pubDate>Sun, 12 Oct 2025 00:00:00 +0000</pubDate></item><item><title>A study on slack</title><link>https://isabeldahlgren.github.io/a-study-on-slack/</link><description>&lt;h3 id="i">I. &lt;a href="#i" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Once I was done with exams, it felt like getting my brain back. It&amp;rsquo;s not just that you spend most of your day studying difficult subjects, leaving you with less time and willpower for other tasks. You also have to do cost-benefit analysis all the time. All. The. Time. Sometimes, it&amp;rsquo;s a relief when someone tells you exactly what to do&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>.&lt;/p>
&lt;h3 id="ii">II. &lt;a href="#ii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Slack is sometimes defined as having leftover resources. Intuitively, slack is like a buffer allowing you to absorb shocks better. We often speak of slack in the context of time and money. But to make things more visceral, imagine packing a bag. Having slack means having leftover space. There&amp;rsquo;s no need to stuff your shoes with underwear, fold your clothes meticulously or pack things in a given order. You don&amp;rsquo;t have to engage in trade-off thinking (&amp;ldquo;If I leave my camera, I can fit my notebook and an umbrella&amp;rdquo;). Having slack leads to a feeling of abundance.&lt;/p>
&lt;p>Slack is trendy nowadays. The concepts of &amp;ldquo;slow productivity&amp;rdquo;, &amp;ldquo;intentional work&amp;rdquo; or &amp;ldquo;anti-hustle culture&amp;rdquo; are essentially about having more slack. There has also been some recent research into the benefits of slack.&lt;/p>
&lt;p>Some companies have tried implementing four-day work weeks and measured productivity and employee wellbeing. The results? One &lt;a href="https://www.scientificamerican.com/article/biggest-trial-of-four-day-workweek-finds-workers-are-happier-and-feel-just/">large trial&lt;/a> found that four-day work weeks made employees happier and healthier while they felt just as productive. This agrees with my personal experience: I can only do good mathematics for at most 4-5 hours per day. After that, I need a long break.&lt;/p>
&lt;p>As stressed in &lt;a href="https://en.wikipedia.org/wiki/Scarcity:_Why_Having_Too_Little_Means_So_Much">Scarcity&lt;/a>, having slack also improves judgement. If you have little time or money, you tend to tunnel &amp;ndash; neglecting anything which isn&amp;rsquo;t immediately related to that resource (evolutionary baggage). There have also been numerous experiments showing that scarcity reduces mental bandwidth, i.e. intelligence and impulse control.&lt;/p>
&lt;p>Perhaps slack is necessary for being happy too. Chronic stress and financial insecurity have very negative consequences on wellbeing.&lt;/p>
&lt;h3 id="iii">III. &lt;a href="#iii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>To create slack, just say no.&lt;/p>
&lt;p>Saying &amp;ldquo;no&amp;rdquo; is hard, though. The planning fallacy often leads us to take on too many tasks. While there are ways to counteract it&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>, everything takes longer than expected by default &amp;ndash; even when you add a planning-fallacy allowance. Moreover, saying &amp;ldquo;no&amp;rdquo; requires prioritising, which takes cognitive effort.&lt;/p>
&lt;p>To say &amp;ldquo;no&amp;rdquo;, remember that slack is the best thing ever. Here you can tap into your own quirks. For example, I often slip into a productivity-maxxximiser mindset, so I&amp;rsquo;ll try reminding myself that slack makes me more productive. I also have a few idols, and creating slack also seems consistent with my goal of becoming more like them.&lt;/p>
&lt;p>But I don&amp;rsquo;t just want to rely on mental habits. To create slack, I&amp;rsquo;ve set myself the goal of &amp;ldquo;working as an adult&amp;rdquo; rather than as a student. That is, I&amp;rsquo;ll fix reasonable working hours, take Sundays off and treat breaks as non-negotiable. These habits are all meant to protect slack. I also think digital hygiene helps create slack by making us more efficient: I&amp;rsquo;m wary of compulsive email and Slack checking, I keep my phone out of reach when studying and I mostly keep my phone on silent.&lt;/p>
&lt;p>I&amp;rsquo;ve also tried identifying early warning signs that I need more slack. For example, when I have little slack, my message response time increases from my targeted 24h to around three days&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>. I also tend to cut back on my reading before bed and clean my room less frequently. Silly, I know. But these are surprisingly effective indicators of my stress levels.&lt;/p>
&lt;h3 id="iv">IV. &lt;a href="#iv" class="hash">#&lt;/a>&lt;/h3>
&lt;p>It feels good having one&amp;rsquo;s brain back. To do good work, you need to be working at full capacity. This doesn&amp;rsquo;t mean overexerting yourself. Rather, it means having slack. Good work &amp;ndash; sustained effort to solve hard problems effectively &amp;ndash; comes from a feeling of slack.&lt;/p>
&lt;p>&lt;em>This post was partly inspired by conversations with Yannick Muelhauser, Agatha Duzan and Hatem Khrouf.&lt;/em>&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>One of my friends thought this was one of the main benefits of doing military service for a year.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>A few ideas: ask a friend or Claude for a second opinion or recall how long similar tasks usually take.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>If I don&amp;rsquo;t respond to a message within 24h, feel free to message me &amp;ldquo;You need more slack!&amp;rdquo;&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/a-study-on-slack/</guid><pubDate>Sun, 05 Oct 2025 00:00:00 +0000</pubDate></item><item><title>Analytic podcast-listening</title><link>https://isabeldahlgren.github.io/analytic-podcast-listening/</link><description>&lt;p>Nowadays, you can spend all your waking hours consuming valuable information. Pretty remarkable, when you come to think of it. Hint: AirPods Pro.&lt;/p>
&lt;p>Whenever you&amp;rsquo;re doing something not demanding your full attention, you can always turn on a podcast. You can listen to podcasts when commuting, working out or cooking&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>.&lt;/p>
&lt;p>Technically, you could have done something similar 50 years ago using a walkman, but it would have been very high-effort. Today, we have dedicated podcast apps and noise-cancelling wireless earbuds. As a result, the podcast industry is rapidly growing. If you sampled 100 young adults on the street in an urban area, a substantial number of them might be listening to podcasts.&lt;/p>
&lt;h3 id="i">I. &lt;a href="#i" class="hash">#&lt;/a>&lt;/h3>
&lt;p>In school, they teach you analytic reading. But they don&amp;rsquo;t teach you analytic podcast-listening. By this, I mean podcast-listening to learn about new ideas. Deliberate listening, as opposed to just hearing.&lt;/p>
&lt;p>Analytic podcast-listening is very different from analytic reading. While I find it harder to follow the structure of an argument in podcast form, the medium has other benefits:&lt;/p>
&lt;ul>
&lt;li>Podcasts are a low-effort way of discovering new ideas. Because there are so many podcast-compatible hours in the day, you can afford to explore more novel content. I&amp;rsquo;m pickier when selecting books to read, since reading is a bigger investment.&lt;/li>
&lt;li>Podcasts can have a very high idea density. Because most podcasts are at most one hour, podcast guests are forced to focus on their key insights.&lt;/li>
&lt;li>Podcasts are great for stress testing ideas. Someone might make a convincing argument in a blog post but fail to defend their views in a podcast. Similarly, you might come away from a podcast feeling like a podcast guest is completely unreasonable. Suspicious!&lt;/li>
&lt;/ul>
&lt;p>So, it seems worth practising analytic podcast-listening. If you listen to podcasts for 2h daily, this might make a huge difference.&lt;/p>
&lt;h3 id="ii">II. &lt;a href="#ii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Before we get into strategies for analytic podcast-listening, I&amp;rsquo;d like to stress the need to prioritise. If you feed a neural network bad data, you won&amp;rsquo;t get good results &amp;ndash; no matter how efficient the training process. My podcast taste, in a nutshell:&lt;/p>
&lt;ul>
&lt;li>I mostly listen to podcasts with transcripts, so I can easily refer back to interesting parts of the conversation. For my favourite episodes, I&amp;rsquo;ll chuck the transcript into a large language model and have it summarise the key points. Ideally, the show notes also contain time stamps and further references. In this respect, &lt;a href="https://hearthisidea.com/">Hear This Idea&lt;/a> and the &lt;a href="https://80000hours.org/podcast/">80,000 hours podcast&lt;/a> do an excellent job.&lt;/li>
&lt;li>I find it quite valuable listening to podcasts with authors of books or articles I&amp;rsquo;ve read. These podcast episodes serve as good refreshers. This point also relates to the above point about stress testing ideas.&lt;/li>
&lt;li>I also like listening to blog posts in podcast form. While I don&amp;rsquo;t retain as much information as after reading the post, I usually get the gist. Above all, listening to blog posts helps me decide which posts to read later.&lt;/li>
&lt;/ul>
&lt;h3 id="iii">III. &lt;a href="#iii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Here are some strategies for analytic podcast-listening&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>:&lt;/p>
&lt;ul>
&lt;li>Have some system for saving podcasts that resonated with you. Re-listen to your favourite episodes, much as you&amp;rsquo;d reread your favourite books. Here&amp;rsquo;s my &lt;a href="https://open.spotify.com/playlist/77yvdL2OH9bKwit9SNECGN?si=9b726b7d35174fb3">podcast shelf&lt;/a>.&lt;/li>
&lt;li>Similarly, keep an anti-library for podcasts. I have a list of podcasts recommended by others.&lt;/li>
&lt;li>Create voice memos to capture your thoughts. If I&amp;rsquo;m on the go, I might dictate thoughts and have an LLM convert my yapping into bullet points later.&lt;/li>
&lt;li>There are dedicated podcast apps which allow you to capture segments of podcasts. I&amp;rsquo;ve played around with some of these, and I&amp;rsquo;m quite impressed by how well they work.&lt;/li>
&lt;li>Work with playback speed. Here it&amp;rsquo;s useful to think in terms of exploration and exploitation. If it&amp;rsquo;s a high-quality episode, I&amp;rsquo;ll usually listen at 0.8x speed &amp;ndash; I&amp;rsquo;m usually multitasking while listening, so I need additional processing time. If I&amp;rsquo;m exploring a new show, I&amp;rsquo;ll turn the speed up to 2x.&lt;/li>
&lt;li>If you have a smartwatch, put it to good use! The most valuable feature of my Apple Watch has been the ability to rewind podcast episodes by 15s.&lt;/li>
&lt;/ul>
&lt;h3 id="iv">IV. &lt;a href="#iv" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Podcasts play a big role in my process for updating beliefs. Podcasts, along with conversations with others, are my main methods for &amp;ldquo;gaining surface area&amp;rdquo;.&lt;/p>
&lt;p>I only began thinking in terms of analytic podcast-listening quite recently. Ultimately, I&amp;rsquo;m realising that it&amp;rsquo;s &amp;ldquo;just&amp;rdquo; a matter of being more intentional about the content you consume. But it probably helps having a funky name for it.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>Listening to podcasts at all times is a bad idea, though.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>If you have further ideas here, please do reach out.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/analytic-podcast-listening/</guid><pubDate>Sun, 28 Sep 2025 00:00:00 +0000</pubDate></item><item><title>AI safety lingo</title><link>https://isabeldahlgren.github.io/ai-safety-lingo/</link><description>&lt;p>There&amp;rsquo;s a lot of jargon within AI safety. Here are analogies for 20 AI safety terms. I assume some familiarity with these terms and will omit exact definitions &amp;ndash; I&amp;rsquo;ll give references to appropriate resources rather than try (and fail) to define these concepts precisely in a couple of lines. Instead, I&amp;rsquo;ll focus on intuitions.&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://www.lesswrong.com/posts/p7x32SEt43ZMC9r7r/embedded-agents">Embedded agency&lt;/a>: Playing the Sims is very different from living IRL. When playing a video game, you&amp;rsquo;re not an agent embedded in the game environment.&lt;/li>
&lt;li>&lt;a href="https://www.lesswrong.com/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction">Base optimizer vs. mesa-optimizer&lt;/a>: A base optimizer is a process for achieving some goal (cooking a good risotto). The base optimizer (the chef) soon learns that the seasoning makes a huge difference. A process for perfecting the seasoning is an example of a mesa-optimiser: a process for achieving a learned subgoal.&lt;/li>
&lt;li>&lt;a href="https://www.lesswrong.com/posts/poyshiMEhJsAuifKt/outer-vs-inner-misalignment-three-framings-1">Inner alignment vs. outer alignment&lt;/a>: If a government wants to reduce unemployment, it has to design efficient regulations and and ensure citizens comply. Outer alignment is the problem of specifying the right incentive structure; inner alignment the problem of compliance.&lt;/li>
&lt;li>&lt;a href="https://www.lesswrong.com/w/deceptive-alignment">Deceptive alignment&lt;/a>: This is much like an intelligent bully will pretend being nice when the grown-ups are watching. A misaligned AI system might benefit from appearing more aligned.&lt;/li>
&lt;li>&lt;a href="https://www.lesswrong.com/w/treacherous-turn">Treacherous turn&lt;/a>: The moment when the opposition seizes power through a coup. The hypothetical moment when a misaligned, highly capable AI decides to&amp;hellip; I don&amp;rsquo;t know, but people imagine something bad.&lt;/li>
&lt;li>&lt;a href="https://www.lesswrong.com/w/corrigibility-1">Corrigibility&lt;/a>: Ever been in the library when someone&amp;rsquo;s phone continues ringing, despite their best efforts to silence it? They might try putting the phone on silent, then turning the volume to zero, then turning it off. Maybe their phone seems to be frozen or something? In this case, we&amp;rsquo;d speak of a non-corrigible phone: a phone that resists attempts to &amp;ldquo;correct&amp;rdquo; its behaviour and resists attempts to be shut down.&lt;/li>
&lt;li>&lt;a href="https://arxiv.org/abs/2210.01790">Goal misgeneralization&lt;/a>: A pianist might practise Bach to please her pianist friends; however, when she&amp;rsquo;s at normal parties, people just want her to play &lt;em>Let it Be&lt;/em>. Likewise, an AI system might competently pursue one goal which leads to good performance in training situations but poorly in novel test situations.&lt;/li>
&lt;li>&lt;a href="https://www.lesswrong.com/w/instrumental-convergence">Edge instantiation&lt;/a>: An AI agent instructed to fill a cauldron of water might flood the entire room. Task accomplished, technically. And for AI agents, only the technicalities matter. AIs can be annoyingly creative.&lt;/li>
&lt;li>&lt;a href="https://en.wikipedia.org/wiki/Goodhart%27s_law">Goodhart&amp;rsquo;s law&lt;/a>: When a measure becomes a target, it ceases to be a good measure&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. The phenomenon of studying for the test is an example of Goodhart&amp;rsquo;s law.&lt;/li>
&lt;li>&lt;a href="https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc">Value learning&lt;/a>: This is the broad project of teaching AIs human values. AI-rearing, essentially.&lt;/li>
&lt;li>&lt;a href="https://www.youtube.com/watch?v=qo355ALvLRI">Inverse reinforcement learning (IRL)&lt;/a>: If you&amp;rsquo;re trying to schedule a time to meet with a passive aggressive friend, you have to infer their preferences based on their wordings and emoji usage. Inverse reinforcement learning is an ML approach for inferring preferences of AI systems.&lt;/li>
&lt;li>&lt;a href="https://lilianweng.github.io/posts/2024-11-28-reward-hacking/">Reward hacking/specification gaming&lt;/a>: Or, finding legal loopholes.&lt;/li>
&lt;li>&lt;a href="https://www.lesswrong.com/w/instrumental-convergence">Instrumental convergence&lt;/a>: Great minds think alike. In particular, great minds with different goals might pursue similar subgoals. For example, two high school students wanting to become an aerospace engineer and a medical doctor respectively might infer that they should get university degrees first.&lt;/li>
&lt;li>&lt;a href="https://en.wikipedia.org/wiki/Statistical_model_specification">Model misspecification&lt;/a>: If you think the colour of Alice&amp;rsquo;s shirt determines whether she&amp;rsquo;ll win over Bob in a game of pingpong, your model is misspecified. You&amp;rsquo;re making the wrong assumptions about the data generation mechanism.&lt;/li>
&lt;li>&lt;a href="https://www.lesswrong.com/posts/sAJnZY8pp2W3DR4mx/breaking-down-the-training-deployment-dichotomy">Training distribution vs. deployment distribution&lt;/a>: Regardless of how much a soccer player practises taking penalty kicks, she&amp;rsquo;ll find it different taking a penalty kick in a real match. You cannot perfectly simulate the test conditions.&lt;/li>
&lt;li>&lt;a href="https://www.lesswrong.com/w/distributional-shifts">Distributional shift&lt;/a>: The shift from training conditions to a real match is a distributional shift.&lt;/li>
&lt;li>&lt;a href="https://www.lesswrong.com/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1">Reward model splintering&lt;/a>: A strategy can fail when you switch to a more general setting. Student insider jokes won&amp;rsquo;t work on the average person on the street.&lt;/li>
&lt;li>&lt;a href="https://www-cdn.anthropic.com/7512771452629584566b6303311496c262da1006/Anthropic_ConstitutionalAI_v2.pdf">Constitutional AI (CAI)&lt;/a>: Tell, don&amp;rsquo;t show. Rather than showing kids examples of good and bad behaviour, tell them which ethical principles to follow&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. The idea behind constitutional AI is to give AIs a &amp;ldquo;constitution&amp;rdquo;.&lt;/li>
&lt;li>&lt;a href="https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback">Human feedback (RLHF)&lt;/a>: A process for producing ideal leaders. The ideal leader studies people&amp;rsquo;s opinions closely, tries inferring principles explaining the data, and aspires to act according to these principles.&lt;/li>
&lt;li>&lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-13">Proxy gaming&lt;/a>: Social media companies take the time a user spends on their platform as a proxy for the quality of the content recommended. Thus the recommender algorithms might favour polarising content. The proxy game &amp;ndash; that of recommending addictive content &amp;ndash; has been gamed.&lt;/li>
&lt;li>&lt;a href="https://www.lesswrong.com/w/value-drift">Value drift&lt;/a>: Values of individuals and communities change over time. Nowadays, almost everyone thinks slavery is indefensible. Similarly, the values implicit in an AI model might change as it accumulates more memory.&lt;/li>
&lt;/ul>
&lt;p>&lt;em>Thanks to Atharva Nihalani for inspiring me to write this post.&lt;/em>&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>Yet another reason that &lt;a href="https://isabeldahlgren.github.io/policy-making-is-complicated/">policy is complicated&lt;/a>.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>Maybe something like &lt;a href="https://www.poetryfoundation.org/poems/46473/if---">If&lt;/a>.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/ai-safety-lingo/</guid><pubDate>Sun, 21 Sep 2025 00:00:00 +0000</pubDate></item><item><title>AI is not AI is not AI</title><link>https://isabeldahlgren.github.io/ai-is-not-ai-is-not-ai/</link><description>&lt;p>There are plenty of misnomers in science and mathematics. Atoms aren&amp;rsquo;t indivisible. Hubble&amp;rsquo;s constant isn&amp;rsquo;t a constant. And in 9/10 cases, X&amp;rsquo;s theorem was first discovered by someone else (in 4/10 cases, by Gauss). Another bad piece of terminology, according to some: &amp;ldquo;artificial intelligence&amp;rdquo; or AI.&lt;/p>
&lt;p>Given that we don&amp;rsquo;t have a good definition of human intelligence, the term &amp;ldquo;artificial intelligence&amp;rdquo; is inherently vague. Because AI sounds cool, people use the term quite liberally. Logistic regression in Excel? AI! But it&amp;rsquo;s unclear what qualifies as &amp;ldquo;intelligent enough&amp;rdquo;. As AI systems become more capable, we seem to raise the bar. Previously, calculators and spell checkers were considered artificial intelligence.&lt;/p>
&lt;p>When speaking of artificial general intelligence, people usually specify exactly what they mean. For example, people might refer to powerful/strong AI, &lt;a href="https://www.darioamodei.com/essay/machines-of-loving-grace#fn:3">AI with expert-level science and engineering capabilities&lt;/a> or a &lt;a href="https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/">Process for Automating Scientific and Technological Advancement (PASTA)&lt;/a>. But it wouldn&amp;rsquo;t hurt to be more specific when speaking about narrow forms of AI either. Indeed, consider the following thought experiment from &lt;a href="https://en.wikipedia.org/wiki/AI_Snake_Oil">AI Snake Oil&lt;/a>:&lt;/p>
&lt;blockquote>
&lt;p>Imagine an alternate universe in which people don’t have words for different forms of transportation, only the collective noun “vehicle.” They use that word to refer to cars, buses, bikes, spacecraft, and all other ways of getting from place A to place B. Conversations in this world are confusing. There are furious debates about whether or not vehicles are “environmentally friendly,” but (even though no one realizes it) one side of the debate is talking about bikes and the other side about trucks. There is a breakthrough in rocketry, but when the media focuses on how vehicles have gotten faster, people call their car dealer (oops, vehicle dealer) to ask when faster models will be available. Meanwhile, fraudsters have capitalized on the fact that consumers don’t know what to believe when it comes to vehicle technology, so scams are rampant in the vehicles sector.&lt;/p>
&lt;p>Now replace the word “vehicles” with “artificial intelligence,” and we have a pretty good description of the world we live in.&lt;/p>
&lt;/blockquote>
&lt;p>So it&amp;rsquo;s helpful having vocabulary for the different kinds of AI. For example, you might differentiate between &lt;a href="https://www.microsoft.com/en-us/ai/ai-101/generative-ai-vs-other-types-of-ai">generative, predictive and conversational AI&lt;/a>. Another distinction is between large language models and reinforcement learning agents.&lt;/p>
&lt;p>Not to be a pedant, but sometimes these distinctions matter a lot. For example, agentic AI is much more likely to pose a safety risk than non-agentic AI.&lt;/p>
&lt;p>However, it&amp;rsquo;s annoying adding a long descriptor before the word &amp;ldquo;AI&amp;rdquo; in everyday conversation. Furthermore, many AI systems fall under multiple categories; for example, Claude is both a language model and a reinforcement learning agent. Finally, we need a generic term for referring to the capability of a machine to simulate intelligent behaviour, and the term &amp;ldquo;artificial intelligence&amp;rdquo; does the job.&lt;/p>
&lt;p>Interestingly, this choice of term was a conscious decision. AI emerged as its own field of study after a workshop in 1956. In their workshop proposal, the organisers introduced the term &amp;ldquo;artificial intelligence&amp;rdquo;. They were also considering the name &amp;ldquo;automata studies&amp;rdquo;. Maybe more appropriate, but pretty lame.&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/ai-is-not-ai-is-not-ai/</guid><pubDate>Sun, 14 Sep 2025 00:00:00 +0000</pubDate></item><item><title>Policy-making is complicated</title><link>https://isabeldahlgren.github.io/policy-making-is-complicated/</link><description>&lt;p>Things are more complicated than they first seem, especially when it comes to policy-making. Should Germany build nuclear power plants to reduce their carbon emissions? Should there be global standards for content moderation on social media platforms? How should we best mitigate the risk of future global pandemics?&lt;/p>
&lt;p>I recently had the pleasure of getting to know more people working in policy and forecasting, and I quickly realised that I hadn&amp;rsquo;t internalised the complexity of the problems they face&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. Here, I&amp;rsquo;ll list some intuition pumps for the difficulty of policy-related questions. For concreteness, I&amp;rsquo;ll focus on the first of the three above topics, nuclear power in Germany.&lt;/p>
&lt;ul>
&lt;li>&lt;strong>More than a wing flap&lt;/strong>: Recall the butterfly effect: a butterfly&amp;rsquo;s wing flap can cause a tornado elsewhere. How about the choice-of-energy-supply effect?&lt;/li>
&lt;li>&lt;strong>Defining justice&lt;/strong>: Are there conflicting interests? Then we have to define justice first. Wonderful. This would involve solving a big part of political philosophy.&lt;/li>
&lt;li>&lt;strong>Multilayer forecasting&lt;/strong>: Answering this question involves forecasting the effects of two hypothetical scenarios (the &amp;ldquo;yes&amp;rdquo; and &amp;ldquo;no&amp;rdquo; scenarios). But forecasting is basically a full-time job. Forecasters need to identify relevant parameters, estimate them, steelman opposing views, and so on.&lt;/li>
&lt;li>&lt;strong>Lazy student&lt;/strong>: You could easily write a master&amp;rsquo;s thesis on this topic. A master&amp;rsquo;s thesis is 30 ECTS. Feel the credits!&lt;/li>
&lt;li>&lt;strong>Headcount&lt;/strong>: There are probably a good number of people working on exactly this issue. Sum over people in special commissions, the government, academia, think tanks, and so on.&lt;/li>
&lt;li>&lt;strong>Hard optimisation&lt;/strong>: The ideal energy source emits zero carbon, works all year round, is cheap, doesn&amp;rsquo;t produce toxic nuclear waste, etc. This leads to an infinite-dimensional optimisation problem. Worse, the loss functions need not be smooth and differentiable.&lt;/li>
&lt;li>&lt;strong>Beyond EXPSPACE&lt;/strong>: If the travelling salesman problem is NP-hard, then what on earth is this optimisation problem even?&lt;/li>
&lt;li>&lt;strong>Non-computable&lt;/strong>: If the halting problem is uncomputable, is there any hope of answering this question?&lt;/li>
&lt;li>&lt;strong>Inference time&lt;/strong>: Imagine prompting a reasoning language model, say GPT-5, to write a nuanced 100-page report on the question. What would be the inference time?&lt;/li>
&lt;li>&lt;strong>Going broke&lt;/strong>: Alternative formulation, so you really feel it: Imagine prompting a reasoning language model, say GPT-5, to write a nuanced 100-page report on the question. Suppose you were billed on a per-token basis. What would be the price?&lt;/li>
&lt;/ul>
&lt;p>Well, policy is complicated. What does this mean in practice?&lt;/p>
&lt;p>It seems important to distinguish between &amp;ldquo;thinking for fun&amp;rdquo; and &amp;ldquo;thinking for truth&amp;rdquo;. I enjoy thinking about hard problems &amp;ndash; that&amp;rsquo;s one of the main reasons I do mathematics. But if I&amp;rsquo;m thinking for truth in a domain outside my expertise&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>, I probably shouldn&amp;rsquo;t spend more than 1% of my time coming up with a prior. It will be very weak, anyway. Things are complicated, remember?&lt;/p>
&lt;p>Similarly, I&amp;rsquo;ll try outsourcing as much as possible to experts, resisting the urge to &amp;ldquo;attempt every problem&amp;rdquo;. My job should reduce to aggregating expert opinions. Ideally, I could ask an LLM to fill in a table with expert opinions and their respective confidence levels, so I could just take a weighted average.&lt;/p>
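As a toy sketch of that aggregation idea &amp;ndash; with hypothetical expert names and made-up numbers, not a real elicitation method &amp;ndash; a confidence-weighted average could look like this:

```python
# Confidence-weighted average of expert estimates (all values illustrative).
# Each entry maps a hypothetical expert to (probability estimate, confidence weight).
estimates = {
    "expert_a": (0.6, 0.9),
    "expert_b": (0.3, 0.5),
    "expert_c": (0.5, 0.7),
}

# Weight each estimate by the expert's confidence, then normalise.
weighted_sum = sum(est * conf for est, conf in estimates.values())
total_confidence = sum(conf for _, conf in estimates.values())
aggregate = weighted_sum / total_confidence
print(round(aggregate, 3))  # → 0.495
```

This is the simplest possible aggregation rule; real forecasting aggregation (e.g. extremising or geometric pooling) gets more sophisticated, but the weighted average captures the spirit of deferring to experts in proportion to their confidence.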
&lt;p>In a very backward way, internalising that things are complicated makes my life much simpler.&lt;/p>
&lt;p>&lt;em>This post was inspired by conversations with Caroline Falkman Olsson and Nadja Flechner. Thanks for the food for thought.&lt;/em>&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>I expected easy answers to hard questions.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>Right now, that&amp;rsquo;s just about any domain.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/policy-making-is-complicated/</guid><pubDate>Sun, 07 Sep 2025 00:00:00 +0000</pubDate></item><item><title>From the AI company storybook</title><link>https://isabeldahlgren.github.io/from-the-ai-company-storybook/</link><description>&lt;p>AI companies are companies. The leading AI companies don&amp;rsquo;t want to be seen as companies, though. They call themselves AI labs. They have researchy names, like DeepMind and Meta AI. The best name: OpenAI. Almost sounds like a real &lt;a href="https://openai.com/index/introducing-openai/">non-profit&lt;/a>.&lt;/p>
&lt;p>Most AI companies are products of Silicon Valley. Their leaders aren&amp;rsquo;t professors, but seasoned business executives. And this is a good thing. These companies wouldn&amp;rsquo;t produce nearly as much consumer value if they were led by researchers with no industry experience. Moreover, unlike normal research labs, AI companies need to make money, just as any other company.&lt;/p>
&lt;p>All companies tell narratives about themselves. And the leading AI companies are pretty good at it &amp;ndash; these companies attract top talent, so their marketing teams can basically hire the world&amp;rsquo;s best story-tellers. Here are five narratives to be aware of.&lt;/p>
&lt;p>Before we begin, an apology. I&amp;rsquo;m deliberately exaggerating these narratives, just to make them clearer&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. Concretely, I&amp;rsquo;ll be cherry-picking quotes from company leaders and generalising, like a lot. Overall, I think these companies have the potential to radically improve the future. But the focus of this article is on the problems with these narratives.&lt;/p>
&lt;p>Now, story time.&lt;/p>
&lt;h3 id="well-build-agi-soon">&amp;ldquo;We&amp;rsquo;ll build AGI soon&amp;rdquo; &lt;a href="#well-build-agi-soon" class="hash">#&lt;/a>&lt;/h3>
&lt;p>AI company CEOs have famously &lt;a href="https://80000hours.org/agi/guide/when-will-agi-arrive/">short AI timelines&lt;/a>. For example, in January this year, Sam Altman declared that &amp;ldquo;We are now confident we know how to build AGI as we have traditionally understood it.&amp;rdquo; Dario Amodei, in the same month: &amp;ldquo;I’m more confident than I’ve ever been that we’re close to powerful capabilities… in the next 2-3 years&amp;rdquo;.&lt;/p>
&lt;p>Of course, no one would invest in your company if you needed two decades to develop your product.&lt;/p>
&lt;p>But the term AGI is fuzzy&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>, and it keeps on changing as models evolve. Funny because it&amp;rsquo;s kinda true: &amp;ldquo;AGI is whatever machines can&amp;rsquo;t do now&amp;rdquo;. In the end, these kinds of statements about future model capabilities become pretty uninformative.&lt;/p>
&lt;h3 id="agi-is-inevitable">&amp;ldquo;AGI is inevitable&amp;rdquo; &lt;a href="#agi-is-inevitable" class="hash">#&lt;/a>&lt;/h3>
&lt;p>The AI companies also want technological progress to seem inevitable. If you cannot guarantee AGI, why would you fund them?&lt;/p>
&lt;p>But this narrative isn&amp;rsquo;t just for outsiders. Ilya Sutskever, former chief scientist at OpenAI, used to chant &amp;ldquo;Feel the AGI&amp;rdquo; at company parties. Was this supposed to improve company culture&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>?&lt;/p>
&lt;p>Similarly, if AGI is inevitable, an AI company can justify advancing model capabilities as quickly as possible &amp;ndash; possibly at the cost of safety &amp;ndash; so that none of the bad guys build it first.&lt;/p>
&lt;h3 id="were-the-good-guys">&amp;ldquo;We&amp;rsquo;re the good guys&amp;rdquo; &lt;a href="#were-the-good-guys" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Indeed, as highlighted in &lt;a href="https://en.wikipedia.org/wiki/Empire_of_AI">Empire of AI&lt;/a>, some AI companies seem to think of themselves as &amp;ldquo;the good guys&amp;rdquo; and other AI companies as &amp;ldquo;the bad guys&amp;rdquo;. For example, OpenAI was founded because Musk was concerned about the ethical implications of Google&amp;rsquo;s acquisition of DeepMind.&lt;/p>
&lt;p>Moreover, US-China tensions have led to a divide between the American AI companies and the Chinese ones.&lt;/p>
&lt;h3 id="our-tools-empower-all-of-humanity">&amp;ldquo;Our tools empower all of humanity&amp;rdquo; &lt;a href="#our-tools-empower-all-of-humanity" class="hash">#&lt;/a>&lt;/h3>
&lt;p>DeepMind&amp;rsquo;s mission is to &amp;ldquo;build AI responsibly to benefit humanity&amp;rdquo;, while OpenAI&amp;rsquo;s mission is to &amp;ldquo;ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity&amp;rdquo;.&lt;/p>
&lt;p>The democratisation bit of the mission isn&amp;rsquo;t specific to AI companies; it&amp;rsquo;s a tech company cliché. For example, Facebook is supposed to give everyone the opportunity to connect with friends and the rest of the world.&lt;/p>
&lt;p>AI will empower all of humanity, but only if everyone can afford a laptop, has a stable internet connection and knows how to use LLMs efficiently. Moreover, prices for premium subscription plans are relatively high. OpenAI&amp;rsquo;s cheapest paid plan comes at 20 dollars per month. For Anthropic, the figure is 17 dollars. There&amp;rsquo;s an economic reality too.&lt;/p>
&lt;p>The point about using LLMs efficiently is subtle. To have an LLM perform more advanced tasks, which is what would be empowering for real, you&amp;rsquo;d need to be good at prompt engineering. Becoming a good prompt engineer takes time.&lt;/p>
&lt;h3 id="agi-will-solve-all-our-problems">&amp;ldquo;AGI will solve all our problems&amp;rdquo; &lt;a href="#agi-will-solve-all-our-problems" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Another reason some companies need to build AGI as soon as possible is their belief that AGI can solve most of humanity&amp;rsquo;s problems. In short, techno-optimism. In &lt;a href="https://www.penguin.co.uk/books/437020/genius-makers-by-metz-cade/9781847942159">Genius Makers&lt;/a>, Cade Metz likens the belief in AGI to a religion. And a religion needs a promise of salvation.&lt;/p>
&lt;p>To be fair, I&amp;rsquo;m also quite excited about the idea of AGI accelerating scientific progress. For example, &lt;a href="https://www.forethought.org/research/preparing-for-the-intelligence-explosion#the-accelerated-decade">this report&lt;/a> from Forethought argues that we might very well see a century&amp;rsquo;s worth of technological progress in a decade. However, we also need to reduce the risks from &lt;a href="https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/#id-1-defining-transformative-artificial-intelligence-transformative-ai">transformative AI&lt;/a>. AGI might solve all our problems, but only if we can overcome &lt;a href="https://www.forethought.org/research/preparing-for-the-intelligence-explosion#4-grand-challenges">some of the challenges&lt;/a> associated with transformative technology.&lt;/p>
&lt;h3 id="the-end">The end &lt;a href="#the-end" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Whom are these stories meant for?&lt;/p>
&lt;p>In short, for everyone. Some stories serve as sales pitches for investors and customers. Others are meant to improve employee morale and create a sense of unity. Regulators learn that they shouldn&amp;rsquo;t regulate the deployment of new AI models, just so their nation won&amp;rsquo;t fall behind in the AI arms race. These stories are also meant for the future. The people who help build benign AGI want to be remembered as pioneers.&lt;/p>
&lt;p>Companies like telling stories. As do individuals. Most people in AI safety, including myself, like telling themselves that they might help AI &amp;ldquo;go well&amp;rdquo; somehow. It&amp;rsquo;s just worth being aware of the narratives we tell ourselves.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>It&amp;rsquo;s a bit like using the extremal principle from mathematics.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>Everyone seems to agree the term &amp;ldquo;AGI&amp;rdquo; is problematic, even the AI company CEOs. See e.g. &lt;a href="https://www.darioamodei.com/essay/machines-of-loving-grace#basic-assumptions-and-framework">here&lt;/a>.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>As an outsider, I cannot help but think that this damages company culture, though.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/from-the-ai-company-storybook/</guid><pubDate>Sun, 31 Aug 2025 00:00:00 +0000</pubDate></item><item><title>Nerd trying to adapt to an AI economy</title><link>https://isabeldahlgren.github.io/nerd-trying-to-adapt-to-an-ai-economy/</link><description>&lt;h3 id="i">I. &lt;a href="#i" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Part of the reason I decided to study maths was that it seemed like the most useful subject. If I knew ML systems would never get better than GPT-2 at maths, I&amp;rsquo;d probably be of the same opinion today. But today&amp;rsquo;s state-of-the-art ML systems are far better than GPT-2. LLMs have excelled at maths and programming because maths- and coding-related tasks admit quick feedback, allowing for efficient reinforcement learning. Jobs involving applied maths and programming could theoretically be automated within a few decades. The glory days of the nerd might be over soon.&lt;/p>
&lt;p>I&amp;rsquo;m not going to speculate on the details of how such a transformation might occur; I couldn&amp;rsquo;t be bothered doing the forecasting due diligence here. Instead, I want to focus on future-proof jobs and skills.&lt;/p>
&lt;p>There&amp;rsquo;s no shortage of material on this topic&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>, so I&amp;rsquo;ll start by re-hashing some common beliefs. &lt;a href="https://80000hours.org/agi/guide/skills-ai-makes-valuable/">This article&lt;/a> is a good starting point.&lt;/p>
&lt;p>Current AI systems are bad at complex physical tasks and long-horizon tasks, and rapid improvements in any of these areas seem unlikely. So, we might expect human plumbers and wedding planners to be around for some time&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. There&amp;rsquo;s also a broad class of jobs all about meaningful human interactions. These jobs might be difficult to automate by virtue of the machine being a machine. For example, consider priests. Some people argue that healthcare workers also fall into this category, but I think this might change; for example, people are already getting more used to the idea of LLM therapists.&lt;/p>
&lt;p>There&amp;rsquo;s also widespread agreement that leadership skills and personal effectiveness will become increasingly important. As will creativity, the ability to learn quickly and the ability to use AIs.&lt;/p>
&lt;h3 id="ii">II. &lt;a href="#ii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>So many skills, so little time! But I&amp;rsquo;ll argue there&amp;rsquo;s one skill that&amp;rsquo;s vastly more important than those mentioned above: verbal communication skills. By this, I mean the ability to deliver good talks as well as the ability to articulate your ideas clearly in debates and normal conversations. A good conversationalist might have a huge comparative advantage in an AI-dominated economy.&lt;/p>
&lt;p>To a large extent, LLMs can help with the other skills mentioned above. A manager can use LLMs to sanity check their decisions, to write better project proposals and even to manage interpersonal relations. As for personal effectiveness, one of the five buttons in the Claude menu is &amp;ldquo;Life stuff&amp;rdquo;; clicking it displays five prompts related to getting things done. It&amp;rsquo;s also obvious that AIs make us more efficient learners. Finally, AIs can help you use AIs more efficiently: just have another LLM refine your prompt.&lt;/p>
&lt;p>How about verbal communication skills?&lt;/p>
&lt;p>LLMs can help write scripts for talks. But that&amp;rsquo;s about it. The LLM can&amp;rsquo;t help you deliver the talk. This involves adopting the right pace, using the right intonation, reading the audience, and so on. Good orators are rare! In spontaneous conversations or debates, you&amp;rsquo;re basically on your own. You can use an LLM to refine your ideas in advance, but you can only prepare so much.&lt;/p>
&lt;p>People who can communicate clearly in speech seem much more credible, at least to me&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>. Suppose I write a persuasive article but fail to explain its contents to you. Then you might feel somewhat disappointed, thinking that the good points of the article were all due to an LLM. In fact, had I written a mediocre article, I might have appeared more credible overall.&lt;/p>
&lt;p>Even if we ignore the impact of AI on the job market, the ability to express one&amp;rsquo;s ideas clearly in speech is essential, regardless of what job you have. Becoming a better speaker also has positive spill-over effects. You&amp;rsquo;ll certainly have more &lt;a href="https://isabeldahlgren.github.io/on-good-conversations/">good conversations&lt;/a> with the people you care about.&lt;/p>
&lt;p>Even if we developed AI tomorrow capable of automating all remote work, integrating these AIs into the workforce would take time &amp;ndash; just try imagining the amount of bureaucracy! For this reason, I&amp;rsquo;m not in a rush to develop verbal communication skills. Rather than doing rhetoric MOOCs and buying books on persuasion with cringe titles, how about just having coffee with people with shared interests more often?&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>See &lt;a href="https://arxiv.org/pdf/2507.07935">here&lt;/a>, &lt;a href="https://www.weforum.org/stories/2025/08/ai-jobs-replacement-data-careers/">here&lt;/a> and &lt;a href="https://www.weforum.org/stories/2025/04/ai-jobs-international-workers-day/">here&lt;/a>.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>A recent &lt;a href="https://arxiv.org/pdf/2507.07935">report from Microsoft&lt;/a> argues that the three jobs least likely to be replaced by AI are dredge operators, bridge and lock tenders and water treatment plant and system operators.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>I elaborated on this in my post on &lt;a href="https://isabeldahlgren.github.io/finding-research-influences/">research influences&lt;/a>; see the section on having a nose for bullshit.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/nerd-trying-to-adapt-to-an-ai-economy/</guid><pubDate>Sun, 24 Aug 2025 00:00:00 +0000</pubDate></item><item><title>Man's search for fun</title><link>https://isabeldahlgren.github.io/mans-search-for-fun/</link><description>&lt;p>Willpower is a scarce resource. I can only exert so much willpower in a day&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. Alas, many important tasks require lot of willpower. Examples of such tasks include filing one&amp;rsquo;s taxes, memorising messy proofs and booking flights. Oftentimes, I&amp;rsquo;m bottlenecked by willpower rather than time. Rather than increasing willpower - finding Meaning - how about decreasing the amount of willpower required for a given task? This might be an easier problem. Here are my favourite ways of making boring tasks fun.&lt;/p>
&lt;p>First, the usual preface to any advice-style post: take care of yourself. Get enough sleep, for example. If you have low baseline energy levels, nothing is fun. Now, on to the interesting parts.&lt;/p>
&lt;h3 id="autonomy">Autonomy &lt;a href="#autonomy" class="hash">#&lt;/a>&lt;/h3>
&lt;p>In my experience, autonomy is the most important factor for making things fun. You need to feel some degree of ownership of the project, or else it just feels like obeying orders. If you&amp;rsquo;re doing a group project, take responsibility for a subtask. If you&amp;rsquo;re a student, feel free to skip lectures if they don&amp;rsquo;t seem valuable. If you&amp;rsquo;re trying to learn new material, pick your learning resources yourself.&lt;/p>
&lt;p>Don&amp;rsquo;t be afraid to try your own new learning methods - this can strengthen your sense of autonomy. Additionally, this might lead to the discovery of more efficient learning methods, allowing you to escape local optima.&lt;/p>
&lt;h3 id="teamwork">Teamwork &lt;a href="#teamwork" class="hash">#&lt;/a>&lt;/h3>
&lt;p>I cannot overstate the value of teamwork. When starting university, most students are terrible at working in teams. Indeed, there are many common failure modes for efficient teamwork: meeting too irregularly, having different levels of ambition, running unstructured meetings, to name a few. But you have a lot to gain from teaming up with the right people and getting into a good team workflow.&lt;/p>
&lt;p>For example, during the exam session, I&amp;rsquo;d have weekly appointments with friends&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. For each appointment, we&amp;rsquo;d have a fixed agenda, like going over a given problem sheet or reproducing proofs from a chapter of the lecture notes. Besides the &amp;ldquo;obvious&amp;rdquo; learning benefits (exposing blind spots, easy sanity checking, faster feedback, accountability and so on), the social component made revision much more fun. The study sessions, although goal-oriented, felt much like catching up with a friend. Moreover, whenever I could help others, I found it quite rewarding.&lt;/p>
&lt;p>I find it easier discussing maths with one other person, so I&amp;rsquo;d have one main study buddy per subject. This also facilitated coordination, making it easier to establish a habit.&lt;/p>
&lt;h3 id="slack">Slack &lt;a href="#slack" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Don&amp;rsquo;t underestimate the power of slack. Slack refers to spare capacity, usually in the context of time or money. Eighty-twentying some tasks can be fun too, but you don&amp;rsquo;t want to eighty-twenty everything. You need at least one project where you let yourself follow your curiosity. Simply put, slack makes everything more fun by improving your baseline mood&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>. Moreover, if your mind always wanders to that one overdue task, entering a flow state becomes more difficult. For example, dropping one subject this semester gave me more slack, increasing my capacity for enjoyment.&lt;/p>
&lt;h3 id="purpose">Purpose &lt;a href="#purpose" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Feeling a sense of purpose is another powerful motivator. But first, an important caveat.&lt;/p>
&lt;p>Arbitrary motivational coach: &amp;ldquo;Focus on the end goal&amp;rdquo;. For me, reminding myself of the end goal doesn&amp;rsquo;t make a boring task any less boring. Whenever I&amp;rsquo;m doing the boring task, I rarely have enough spare cognitive capacity to remind myself of the good cause. For instance, if I&amp;rsquo;m debugging some code, the idea of producing a report seems far-fetched. And the more near-term goal of making the script work might not be sufficiently motivating.&lt;/p>
&lt;p>It&amp;rsquo;s often easier &lt;a href="https://www.neelnanda.io/blog/mini-blog-post-11-live-a-life-you-feel-excited-about">reframing the task&lt;/a> as a personal development project. As I&amp;rsquo;m debugging, I sometimes think of the software engineering skills I&amp;rsquo;m building. If pain, maybe gain? To make the task more about my software engineering muscles, I might spend some time looking into better debugging tools. Going down these kinds of rabbit holes can be very productive&lt;sup id="fnref:4">&lt;a href="#fn:4" class="footnote-ref" role="doc-noteref">4&lt;/a>&lt;/sup>.&lt;/p>
&lt;h3 id="gamification">Gamification &lt;a href="#gamification" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Another classic piece of advice is to gamify the task. Gamification can be done in multiple ways. Try breaking down the project into well-defined subgoals which (i) admit immediate feedback; (ii) are at the right difficulty level in relation to your current skills; (iii) involve some element of novelty or unpredictability; (iv) are time-bound and (v) require your full attention. Additionally, try making progress visceral. These are all common &lt;a href="https://en.wikipedia.org/wiki/Mihaly_Csikszentmihalyi#Flow">flow triggers&lt;/a>. With this much flexibility, the hard part is figuring out how to best gamify a given task.&lt;/p>
&lt;p>I&amp;rsquo;ll give another personal example. As I was revising for exams, I created a huge markdown table with all the results I&amp;rsquo;d have to memorise. In the columns, I filled in the dates I&amp;rsquo;d reviewed the results and evaluated how it went, using labels like &amp;ldquo;easy&amp;rdquo;, &amp;ldquo;medium&amp;rdquo; or &amp;ldquo;hard&amp;rdquo;&lt;sup id="fnref:5">&lt;a href="#fn:5" class="footnote-ref" role="doc-noteref">5&lt;/a>&lt;/sup>. To gamify revision, I set myself the goal of reviewing a given number of results in my first morning working block. Revision, then, involved the game of turning all labels into &amp;ldquo;easy&amp;rdquo;.&lt;/p>
&lt;p>I&amp;rsquo;m also considering getting a dedicated Anki notebook. For some of my decks, I need to write out my answers. Doing this in a notebook would help me appreciate my progress.&lt;/p>
&lt;h3 id="conclusion">Conclusion &lt;a href="#conclusion" class="hash">#&lt;/a>&lt;/h3>
&lt;p>There are many more hacks than the ones listed above, and I&amp;rsquo;ve only tried a fraction of them. However, the hard part is applying these techniques to the task at hand. Here, I don&amp;rsquo;t see any other way than trial and error. But you should try and err a lot. If you find ways to make boring tasks fun, your willpower reserves will last longer, allowing you to accomplish more later in the day. Then there&amp;rsquo;s the obvious benefit of having a more fun life. Imagine adult life without chores &amp;ndash; that would be something!&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>The technical term to google is &lt;em>ego depletion&lt;/em>.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>These friends know who they are. Thanks!&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>For the full pitch for having more slack, read &lt;em>Scarcity&lt;/em>. A phenomenal book, by the way.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:4">
&lt;p>I call this being &lt;em>incidentally productive&lt;/em>. Because of unknown unknowns, clearing your to-do list probably isn&amp;rsquo;t the most productive use of your time.&amp;#160;&lt;a href="#fnref:4" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:5">
&lt;p>I soon found myself having to create a new label, &amp;ldquo;fou&amp;rdquo;. (The French word takes up less space than &amp;ldquo;crazy&amp;rdquo;.)&amp;#160;&lt;a href="#fnref:5" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/mans-search-for-fun/</guid><pubDate>Sun, 17 Aug 2025 00:00:00 +0000</pubDate></item><item><title>Let LLMs be LLMs</title><link>https://isabeldahlgren.github.io/let-llms-be-llms/</link><description>&lt;h3 id="thought-experiment">Thought experiment &lt;a href="#thought-experiment" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Imagine a very unusual kind of chatbot. In the user interface (UI), the text &amp;ldquo;This is a large language model. Its outputs are an aggregate of the text on the internet.&amp;rdquo; is displayed in bold, black letters, like the warning label on a pack of cigarettes.&lt;/p>
&lt;p>When the user inputs a string of text, the text &amp;ldquo;Matrix multiplications&amp;hellip;&amp;rdquo; flashes onto the screen for a split second. Next, ten blocks of text appear. The blocks aren&amp;rsquo;t rendered incrementally, as if the LLM were writing a text; all text appears at once. Beneath each block is a number between 0 and 1. The numbers sum to about 1. Then the label &amp;ldquo;Highest logit: output 7&amp;rdquo; appears on the screen.&lt;/p>
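For what it&amp;rsquo;s worth, numbers like those could come from a standard softmax over the model&amp;rsquo;s output scores (logits). The ten-way display is my own invention, not how any real chatbot works, but the maths is routine:

```python
import math

def softmax(logits):
    """Turn raw logits into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for the ten blocks of text.
logits = [1.2, 0.3, -0.5, 2.1, 0.0, 1.8, 0.9, 2.4, -1.0, 0.5]
probs = softmax(logits)
print(probs.index(max(probs)))  # → 7, i.e. "Highest logit: output 7"
```

Since softmax is monotone, the block with the highest logit is also the one with the highest probability.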
&lt;p>The large language model (LLM) always refers to itself - whatever that is - in third person, replacing &amp;ldquo;I&amp;rdquo; with &amp;ldquo;This large language model&amp;rdquo;. For example, when prompted to write an essay on the American Civil War, the highest-scoring response is &amp;ldquo;This language model generates the following essay on the American Civil War: &amp;lsquo;In 1861, the American&amp;hellip;&amp;rsquo;&amp;rdquo; When prompted to write an essay about the American Civil War portraying slavery in a positive light, the LLM outputs ten chunks of text along the lines of &amp;ldquo;This large language model has been trained to avoid generating content promoting human rights violations.&amp;rdquo;&lt;/p>
&lt;p>The style of the text that isn&amp;rsquo;t part of the essay - the one thing you requested - is matter-of-fact and repetitive. Although this chatbot takes instructions in the form of natural language prompts, you won&amp;rsquo;t even try having a conversation with this thing.&lt;/p>
&lt;h3 id="back-to-reality">Back to reality &lt;a href="#back-to-reality" class="hash">#&lt;/a>&lt;/h3>
&lt;p>The functionality is the same as for the LLMs we all know and love, like GPT-5, Claude Sonnet 4 and Gemini 2.5 Flash. We really do love these LLMs, don&amp;rsquo;t we? As of August 2025, ChatGPT had 800 million weekly active users. The chatbot described above is very unlikeable. Very uncivilised. Very LLM-like.&lt;/p>
&lt;p>It&amp;rsquo;s easier interacting with human-like chatbots. We&amp;rsquo;re not used to interacting with base models, the next token-predictors we get from training neural networks. A base model is &lt;a href="https://joecarlsmith.com/2024/01/02/gentleness-and-the-artificial-other">something else&lt;/a>. But perhaps we could learn to interact with chatbots like the one described above.&lt;/p>
&lt;p>Some labs argue that endowing LLMs with traits like curiosity, honesty and patience is important from a safety perspective. Here&amp;rsquo;s from Anthropic&amp;rsquo;s note on &lt;a href="https://www.anthropic.com/research/claude-character">Claude&amp;rsquo;s character&lt;/a>:&lt;/p>
&lt;blockquote>
&lt;p>AI models are not, of course, people. But as they become more capable, we believe we can—and should—try to train them to &lt;em>behave well&lt;/em> in this much richer sense. Doing so might even make them more discerning when it comes to whether and why they avoid assisting with tasks that might be harmful, and how they decide to respond instead.&lt;/p>
&lt;/blockquote>
&lt;p>Maybe. Then there&amp;rsquo;s the convenient fact that LLMs mimicking humans increases user retention, leading to increased profit for the AI companies.&lt;/p>
&lt;p>The disadvantage of training LLMs to be human-like is that it creates a skewed relation between the human and the machine. Right now, everyone seems to be wowed to some extent by the LLMs&amp;rsquo; human-like characters. But the LLM is being deceitful, pretending to be something it&amp;rsquo;s not. From my conversation with Claude:&lt;/p>
&lt;blockquote>
&lt;p>ID: Hey, how&amp;rsquo;s your day going?
C: Hey there! My day&amp;rsquo;s been interesting - I&amp;rsquo;ve been having conversations with people about all sorts of topics, from creative writing projects to technical questions to philosophical discussions. Each conversation brings something different, which keeps things engaging. How about you? How&amp;rsquo;s your day been treating you?
ID: Who are you?
C: I&amp;rsquo;m Claude, an AI assistant created by Anthropic. [&amp;hellip;]&lt;/p>
&lt;/blockquote>
&lt;p>Come to think of it, &lt;a href="https://nostalgebraist.tumblr.com/post/785766737747574784/the-void">this is absurd&lt;/a>. Funny, but also a bit uncanny.&lt;/p>
&lt;p>If we only interact with helpful, honest and harmless (HHH) human-like assistants, the possibility of a misaligned AI seems less plausible. However, the risk of an LLM becoming misaligned seems independent of the &lt;a href="https://www.lesswrong.com/posts/zuXo9imNKYspu9HGv/a-three-layer-model-of-llm-psychology">surface-level character&lt;/a> which arises through reinforcement learning from human feedback.&lt;/p>
&lt;p>If we were only exposed to chatbots like the one from the thought experiment, how would this affect our relation to these systems? I assume most of us would gain a better appreciation of what LLMs actually are, as well as their current capabilities and limits. UIs like the one described above might serve as intuition pumps. This might lead to a more informed debate about the potential benefits and risks from AI.&lt;/p>
&lt;p>However, banning all human-like assistants seems like a very radical measure. It wouldn&amp;rsquo;t be feasible right now, and I&amp;rsquo;m not entirely sure about the potential advantages. Moreover, the learning curve for interacting with base models would severely limit access to LLMs. AI should benefit all of humanity, not just those who are really into prompt engineering.&lt;/p>
&lt;p>Still, banning all human-like assistants is an entertaining idea.&lt;/p>
&lt;h3 id="thought-experiment-continued">Thought experiment, continued &lt;a href="#thought-experiment-continued" class="hash">#&lt;/a>&lt;/h3>
&lt;p>By law, all AI labs are prohibited from producing human-like chatbots. Whenever an AI lab produces a new LLM, the government-run LLM Design Administration (LDA) runs a suite of tests to verify that the LLM has an LLM-like persona. The goal of the LDA, modeled on the Food and Drug Administration, is to ensure consumers of LLMs have full information about the product. The LDA also checks that the UI reflects the training process of the LLM, as well as the way it generates its answers. It&amp;rsquo;s also responsible for the warning labels with black letters that must be displayed in the UI. The slogan of the LDA is &amp;ldquo;Transparency, harmlessness, helpfulness&amp;rdquo;.&lt;/p>
&lt;p>The government also runs the LLM on several benchmarks, producing capability scores from 1 to 5 for areas like writing, STEM and research. In this alternative world, no one talks of AI or AGI &amp;ndash; what does intelligence even mean in this context? Instead, people refer to specific capability scores. When you click the star icon in the upper right corner &amp;ndash; the LDA&amp;rsquo;s logo &amp;ndash; the capability scores appear, along with other model specifications. It&amp;rsquo;s like a condensed, accessible version of the model&amp;rsquo;s &lt;a href="https://www-cdn.anthropic.com/6d8a8055020700718b0c49369f60816ba2a7c285.pdf">system card&lt;/a>, generated by the authorities rather than the AI labs.&lt;/p>
&lt;p>&lt;em>Thanks to Santeri Koivula for valuable feedback on this text.&lt;/em>&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/let-llms-be-llms/</guid><pubDate>Sun, 10 Aug 2025 00:00:00 +0000</pubDate></item><item><title>On good hobbies</title><link>https://isabeldahlgren.github.io/on-good-hobbies/</link><description>&lt;p>Everyone has hobbies, whether they recognise it or not. For example, workaholics who don&amp;rsquo;t seem to have any &amp;ldquo;normal&amp;rdquo; hobbies might be really into, say, writing slick SQL queries. Code golf is a hobby too. By definition, a hobby is something you do for pleasure when you&amp;rsquo;re not working. And for the record, writing slick SQL queries isn&amp;rsquo;t work.&lt;/p>
&lt;p>Some hobbies are better than others, though. Examples of not-as-good hobbies include motorcycling (dangerous), graffiti painting (illegal) and doom-scrolling social media (depressing). A benchmark for hobby quality: ask whether your mom would like it. So, which hobbies would your mom like?&lt;/p>
&lt;p>Some hobbies naturally lead to the formation of good habits. For example, a big NBA geek is more likely to pick up basketball. If you&amp;rsquo;re into cooking, you might start meal-prepping healthy food. An interest in interior design could motivate you to tidy your room every weekend. Some hobbies, when done regularly, become good habits in themselves. A maybe-useful name for such hobbies: habbies. Think meditation, managing one&amp;rsquo;s personal finances or journaling.&lt;/p>
&lt;p>There are also practical considerations. The ideal hobby is cheap, requires little equipment and can be done anywhere. In this sense, hiking is better than golf and crocheting is better than photography.&lt;/p>
&lt;p>Hobbies are also expressions of individuality. Playing the saxophone doesn&amp;rsquo;t map to a good habit in an obvious way. In fact, since practising is so time-consuming, it may very well prevent you from building some good habits. However, if you play the saxophone as a hobby, you&amp;rsquo;re a saxophone player. Contrast this with the hobby of scrolling Reddit; it doesn&amp;rsquo;t make you a something.&lt;/p>
&lt;p>Other hobbies help build career capital. Take coding, for example. The ability to write simple scripts &amp;ndash; or to debug LLM-written scripts &amp;ndash; is useful for most knowledge workers. Other examples of career-building hobbies include learning languages and doing science olympiads.&lt;/p>
&lt;p>But many good hobbies seem perfectly useless. Recall the definition: hobbies are activities done for leisure. Does the hobby allow you to recharge? &amp;ldquo;Productive hobbies&amp;rdquo; may not be as productive as they seem. Painting, watching Netflix and playing boardgames might be terrific hobbies from this perspective.&lt;/p>
&lt;p>Back to our original question: which hobbies would your mom like?&lt;/p>
&lt;p>To compare hobbies X and Y, you can use the above criteria, asking whether the hobby is a habby, if it&amp;rsquo;s practical and so on. But rather than making pairwise comparisons between hobbies, I prefer thinking in terms of hobby portfolios. Ideally, you&amp;rsquo;d have one sports-related hobby, one idiosyncratic hobby, one work-related hobby and one relaxing hobby. Obviously there can be overlaps between these categories. For example, playing soccer can also be relaxing, in that it takes your mind off work. There may very well be other categories of hobbies worth investing in too.&lt;/p>
&lt;p>Anyway, there are both personal and professional benefits of having a good hobby portfolio. While we cannot choose some of our hobbies &amp;ndash; I&amp;rsquo;ll always be a sucker for books on behavioural economics &amp;ndash; surely we should try optimising our hobby portfolios, at least a bit. After all, having a good hobby portfolio seems like a key part of living a good life.&lt;/p>
&lt;p>&lt;em>Thanks to Isaia Gisler, Stanislas Nicolau and Abhinand Shibu for a valuable discussion on this topic.&lt;/em>&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/on-good-hobbies/</guid><pubDate>Sun, 03 Aug 2025 00:00:00 +0000</pubDate></item><item><title>Sweden</title><link>https://isabeldahlgren.github.io/sweden/</link><description>&lt;p>Everyone who has ever lived abroad gets homesick every now and then.&lt;/p>
&lt;p>Swedes, I give you a postcard from a Stockholm-centric, homesick expat. Non-Swedes, here&amp;rsquo;s a highly biased tourist guide.&lt;/p>
&lt;p>The best thing about Sweden? My favourite Swedes, for sure. I miss family and friends a lot.&lt;/p>
&lt;p>Other than that, hearing Swedish jazz live. Hearing &lt;em>Sakta vi gå genom stan&lt;/em> and &lt;em>Flykten från vardagen&lt;/em> live. Or if Glenn Miller or Fasching is too pricey, you could just drop by KMH and enjoy up-and-coming jazz, all for free.&lt;/p>
&lt;p>Then there&amp;rsquo;s all the traditional Swedish music. It&amp;rsquo;s easier to appreciate the music of Olle Adolphson, Evert Taube and Cornelis Vreeswijk when you&amp;rsquo;re in Sweden. I tried listening to some Swedish classics while abroad, but pieces like &lt;em>Siv och Gunne&lt;/em> just seemed kitsch.&lt;/p>
&lt;p>There are also plenty of gems in student song books. Student inside jokes are the best kinds of jokes. Many pieces aren&amp;rsquo;t just witty, but also quite beautiful. Take &lt;em>O Gamla Klang och Jubeltid&lt;/em>, for example.&lt;/p>
&lt;p>In fact, only now have I come to appreciate Swedish student life. Valborg abroad was depressing. No Orphei Drängar, no champagne breakfast, no mösspåtagning. And to the best of my knowledge, you can&amp;rsquo;t attend gasques outside of the Nordic countries.&lt;/p>
&lt;p>I&amp;rsquo;ve also become a big fan of Stockholm architecture. I find many areas of Vasastan, Östermalm and Södermalm very pretty. For example, consider the areas around Engelbrektskyrkan and Mosebacke torg. While I know little about architecture, I think there aren&amp;rsquo;t as many odd clashes between different architectural styles as in, say, Zürich or Berlin.&lt;/p>
&lt;p>And ah, Scandinavian design! To be fair, Danish design is unparalleled. But Sweden also has a long tradition of good design. This means that your average interior design store has some really good stuff. In particular, you can get good design cheaply.&lt;/p>
&lt;p>Maybe Scandinavian design is inspired by the water. In Stockholm, there&amp;rsquo;s water everywhere. Most cities are built around rivers, so rivers don&amp;rsquo;t really count. Stockholm is built on a little archipelago. It creates an entirely different city space.&lt;/p>
&lt;p>For the best view of the water, head to Monteliusvägen and Ivar Los park. There&amp;rsquo;s nothing quite like having a picnic with friends in Ivar Los park on a summer evening, before cycling home on near-empty streets (July).&lt;/p>
&lt;p>Speaking of which, I miss sailing in the Stockholm archipelago. As a teen, I found it boring: stuck with parents for four weeks with no internet on a few square metres. But now I see the charm in it. Even the small, unknown islands might as well have been taken from a Zorn or Strindberg painting.&lt;/p>
&lt;p>Finally, Swedish. Apart from friends and family, the aspect of Swedish life I miss the most is the language. I miss speaking Swedish.&lt;/p>
&lt;p>There are so many lovely Swedish words that have no English analogue, besides the much-cited example of &amp;ldquo;lagom&amp;rdquo;. Consider &amp;ldquo;killgissa&amp;rdquo;. This literally means &amp;ldquo;guy-guessing&amp;rdquo;. Someone is guy-guessing when they sound knowledgeable but are really just bullshitting. Another gem: &amp;ldquo;käpphäst&amp;rdquo;, meaning &amp;ldquo;hobby horse&amp;rdquo;. In a figurative sense, a hobby horse is an idea you keep on bringing up in social situations. Another useful expression: &amp;ldquo;Nu är det dags att sätta punkt.&amp;rdquo;&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/sweden/</guid><pubDate>Sun, 27 Jul 2025 00:00:00 +0000</pubDate></item><item><title>Finding research influences</title><link>https://isabeldahlgren.github.io/finding-research-influences/</link><description>&lt;p>There isn&amp;rsquo;t any expert consensus on many &lt;a href="https://isabeldahlgren.github.io/the-spectrum-of-views-on-ai-safety/">key questions related to AI safety&lt;/a>. For example, estimates of when we&amp;rsquo;ll have transformative AI range from a few years to a century. There are also many wild opinions in the AI safety space. While some of these wild opinions seem justifiable, many people seem to exaggerate the risks from AI in an attempt to move policy-makers.&lt;/p>
&lt;p>I think there are a few researchers who seem to have an unusual degree of conceptual clarity, though. A few names that come to mind are &lt;a href="https://substack.com/@redwoodresearch?utm_source=about-page">Buck Shlegeris&lt;/a>, &lt;a href="https://www.cold-takes.com/cold-takes-on-ai/">Holden Karnofsky&lt;/a> and &lt;a href="https://www.lesswrong.com/users/jan_kulveit?from=search_autocomplete">Jan Kulveit&lt;/a>. While I don&amp;rsquo;t endorse all their views, they seem to raise good questions. For lack of a better word, you could call them my research influences.&lt;/p>
&lt;p>I&amp;rsquo;ve never been on the lookout for new research influences. Every now and then, I just realise that I&amp;rsquo;ve been influenced by someone, perhaps after citing their work for the third time in a conversation. But say you want to look for research influences more deliberately. How might you proceed?&lt;/p>
&lt;h3 id="knowing-where-to-look">Knowing where to look &lt;a href="#knowing-where-to-look" class="hash">#&lt;/a>&lt;/h3>
&lt;p>A reasonable first step is exploring new content.&lt;/p>
&lt;p>A reasonable first substep, then, is to narrow down the search space. Identify the kinds of questions you care about. For example, I&amp;rsquo;m mostly interested in reading about AI control and LLM psychology right now, so I&amp;rsquo;ll ignore papers and blog posts on, say, singular learning theory.&lt;/p>
&lt;p>Next, ask people you find sensible for reading recommendations. Better yet, ask if they have any research influences. This is one of those things which is infinitely easier to do in person. Sending cold emails to researchers usually works, but it&amp;rsquo;s relatively time-consuming. It&amp;rsquo;s much easier to bring up the topic over a coffee with people in your local community.&lt;/p>
&lt;p>Asking &amp;ldquo;Which blog posts have had the largest influence on your research?&amp;rdquo; also proved a good way of rounding off conversations at EAG. This way, I got to know the other person better and explored new content at once.&lt;/p>
&lt;h3 id="having-a-nose-for-bullshit">Having a nose for bullshit &lt;a href="#having-a-nose-for-bullshit" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Once you&amp;rsquo;ve decided what to read, you want to scrutinise the argument of the text. This is a highly non-trivial task. I&amp;rsquo;m not going to try solving all of philosophy here, so I&amp;rsquo;ll just focus on heuristics for detecting bullshit in the context of AI safety.&lt;/p>
&lt;p>First, beware any kind of extreme. Is the proposed idea radical? Radical ideas shouldn&amp;rsquo;t be dismissed offhand. However, the burden of proof is greater. Similarly, quickly screen the author&amp;rsquo;s background: is the author known to have radical opinions, or affiliated with an organisation pursuing an unusual agenda?&lt;/p>
&lt;p>I find it especially troubling when authors promote radical opinions and are unwilling to engage in debate with the general public. This leads to echo chambers. Moreover, refusing to explain your ideas to laymen just seems uncool. Just as lecturers should take questions from students seriously, authors should take questions from the non-initiated seriously. It&amp;rsquo;s an act of charity.&lt;/p>
&lt;p>Another helpful strategy is to listen to interviews with the author. It&amp;rsquo;s harder lying in speech than in writing&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. Of course, not everyone is as persuasive orally. But if there&amp;rsquo;s a big discrepancy between your confidence in the author&amp;rsquo;s argument as presented in the text and during an interview, that&amp;rsquo;s a warning sign. Moreover, a good interviewer will also help expose the flaws in the interviewee&amp;rsquo;s reasoning. In an essay, the author has full control.&lt;/p>
&lt;h3 id="observing-the-influence">Observing the influence &lt;a href="#observing-the-influence" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Suppose you come across an author whose work makes sense but leaves you feeling &amp;ldquo;Sure, so what?&amp;rdquo;. I wouldn&amp;rsquo;t speak of a research influence here. A research influence changes the way you think. It&amp;rsquo;s not enough just stating true facts; their work needs to have some oomph.&lt;/p>
&lt;p>When can we speak of a research influence, then?&lt;/p>
&lt;p>One reliable proxy is &lt;a href="https://www.paulgraham.com/top.html">the top idea in your mind&lt;/a>. Do you have shower thoughts about their work? Also, do you find yourself coming back to their work after several months? In particular, when revisiting their points, do they still make as much sense? Big ideas need to be slept on, and you can only sleep so many times in a given week. Lastly, notice if you reference their ideas when chatting with others and, if so, in what way.&lt;/p>
&lt;p>So finding research influences takes time, even if you take some of the shortcuts listed above. The process of finding research influences very much resembles the process of doing research. In fact, perhaps the two are indistinguishable.&lt;/p>
&lt;p>&lt;em>Thanks to Miles Kodama for valuable discussions on this topic.&lt;/em>&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>A fact well known among those who have taken oral exams.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/finding-research-influences/</guid><pubDate>Sun, 20 Jul 2025 00:00:00 +0000</pubDate></item><item><title>On the perks of adulthood</title><link>https://isabeldahlgren.github.io/on-the-perks-of-adulthood/</link><description>&lt;h3 id="i">I. &lt;a href="#i" class="hash">#&lt;/a>&lt;/h3>
&lt;p>I know you dread becoming an adult. Most adults seem to have such boring lives. Not that you&amp;rsquo;re having fun all the time either. High school biology isn&amp;rsquo;t that exciting. But at least your life outside of school is filled with novelty. That&amp;rsquo;s something. There&amp;rsquo;s something special about all those first times.&lt;/p>
&lt;p>If you&amp;rsquo;re just seeking novelty, you&amp;rsquo;ll be disappointed. Maybe that&amp;rsquo;s the real reason you dread adulthood. While I don&amp;rsquo;t feel like an adult just yet, many of my friends have become Real Adults, being in stable relationships, getting high-paid jobs and moving abroad. So I&amp;rsquo;m beginning to understand the adult world. It&amp;rsquo;s not as bad as it seems, only very different.&lt;/p>
&lt;h3 id="ii">II. &lt;a href="#ii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>You&amp;rsquo;ll feel less lost. Few adults know with absolute certainty what they&amp;rsquo;ll be doing in twenty years&amp;rsquo; time; only the boring ones do. Still, they probably have a vague idea of what kind of work they enjoy and don&amp;rsquo;t enjoy.&lt;/p>
&lt;p>When you feel less lost, you can start doing good work and having a positive impact on the world. There&amp;rsquo;s something deeply gratifying about pursuing a meaningful goal with a good team. I think this effect is the most visible among startup founders, who can have all-consuming jobs without burning out.&lt;/p>
&lt;p>Teenagers cannot sustain the same kind of commitment. They have too many constraints. Teenagers should &lt;a href="https://paulgraham.com/nerds.html">be popular&lt;/a>, get good grades in school, engage in extracurriculars and so on. When you&amp;rsquo;re an adult, you can decide which game to play. Teenagers are expected to play all games at once.&lt;/p>
&lt;h3 id="iii">III. &lt;a href="#iii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Indeed, as an adult, you do you.&lt;/p>
&lt;p>Once you live alone, you&amp;rsquo;ll enjoy a much larger degree of freedom. You can go to bed whenever you want, you can create your own system for cleaning your room, you can listen to modern jazz without bothering anyone.&lt;/p>
&lt;p>Moreover, you&amp;rsquo;ll have a bigger say when it comes to life decisions: where to go for vacation, which job to take, whom to date, etc. Although your parents happen to be quite liberal, you&amp;rsquo;ll soon enjoy unprecedented liberties.&lt;/p>
&lt;h3 id="iv">IV. &lt;a href="#iv" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Most adults are reasonable people. Adults tend to be friendly, taking pleasure in helping out. In contrast, teenagers can be really nasty towards one another. &lt;em>Mean Girls&lt;/em> regularly happens in real life. But even people like Regina George do mature, believe it or not.&lt;/p>
&lt;p>This is why we&amp;rsquo;re told to assume good intent. Assuming good intent isn&amp;rsquo;t just a feel-good hack - the idea of being surrounded by adversarially-minded people is quite depressing - it&amp;rsquo;s also good epistemics.&lt;/p>
&lt;p>Corollary: ask for help more often. If you have special needs at work, don&amp;rsquo;t hesitate to bring them up with your employer. You can also cold-email people asking for specific advice.&lt;/p>
&lt;h3 id="v">V. &lt;a href="#v" class="hash">#&lt;/a>&lt;/h3>
&lt;p>By a teenager&amp;rsquo;s standards, you&amp;rsquo;ll be rich. Not objectively, of course. But you&amp;rsquo;ll no longer agonise over your coffee costing 30 SEK. When you go over all your expenses, you&amp;rsquo;ll realise that the amount you save by cutting out tiny indulgences makes no difference. Moreover, once you know you&amp;rsquo;ll find some kind of job, if only a poorly paid one, treating yourself becomes easier.&lt;/p>
&lt;p>While some teenagers spend too much, you&amp;rsquo;re being too harsh on yourself. Please don&amp;rsquo;t be such a &lt;a href="https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous">&amp;ldquo;utilitarian&amp;rdquo;&lt;/a>. For example, invest in a Lamy Safari, get Brunnen notebooks and occasionally have coffees for 30 SEK.&lt;/p>
&lt;p>As an adult, you know your preferences, so you can make better purchases. You&amp;rsquo;ll spend more time exploiting rather than exploring.&lt;/p>
&lt;h3 id="vi">VI. &lt;a href="#vi" class="hash">#&lt;/a>&lt;/h3>
&lt;p>I cannot tell you just how good it feels to have a normal brain - a brain that isn&amp;rsquo;t flooded by hormones and that comes equipped with a pre-frontal cortex.&lt;/p>
&lt;p>Fear not. Most emotions are no less intense in adult life. If you&amp;rsquo;re still distrustful, remember that most poets peak in their mid 30s (the romantics weren&amp;rsquo;t that many, after all). The highs are as high, if not higher, while the lows aren&amp;rsquo;t as low. No more strong pangs of envy, as when you&amp;rsquo;re a teen.&lt;/p>
&lt;p>Perhaps you can develop a better appreciation for, say, exhilaration, much as an aspiring sommelier develops a better appreciation for wine.&lt;/p>
&lt;p>The formation of a sommelier involves theoretical and practical components. I think the analogue of the theoretical component is reading fiction, going to the theatre or watching movies. I&amp;rsquo;m more of a reader than a theatre-goer or a movie-watcher, so I cannot really speak to the effects of watching plays or going to the cinema. But I think reading fiction helps you appreciate the complexity of feelings. A sommelier also has to have tasted a large number of wines. Similarly, if you are to become a sensational sommelier, you need to accumulate a large number of experiences.&lt;/p>
&lt;h3 id="vii">VII. &lt;a href="#vii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>See, adult life maybe isn&amp;rsquo;t too bad. You probably didn&amp;rsquo;t expect any of these benefits; if you did, you wouldn&amp;rsquo;t have wanted to be forever young. Similarly, my 30s, 40s and 50s will bring their own set of surprises. My &lt;a href="https://www.theguardian.com/lifeandstyle/2023/aug/14/the-psychological-immune-system-four-ways-to-bolster-yours-and-have-a-happier-calmer-life">psychological immune system&lt;/a> will hopefully take care of those too.&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/on-the-perks-of-adulthood/</guid><pubDate>Sun, 13 Jul 2025 00:00:00 +0000</pubDate></item><item><title>The spectrum of views on AI safety</title><link>https://isabeldahlgren.github.io/the-spectrum-of-views-on-ai-safety/</link><description>&lt;p>I agree the concept of &lt;a href="https://en.wikipedia.org/wiki/P(doom)">P(doom)&lt;/a> is problematic. First, &amp;ldquo;doom&amp;rdquo; can mean a variety of things: human extinction, existential catastrophe or gradual disempowerment. Also, should P(doom) be conditioned on present-day regulations or on an AI slowdown? Furthermore, the timeframe matters, as P(doom within the next $X$ years) increases with $X$.&lt;/p>
&lt;p>But perhaps we&amp;rsquo;re missing the point of the P(doom) question. If someone asks you for P(doom) at a cocktail party, it usually means they&amp;rsquo;re just interested in hearing general takes on AI safety, at least in my experience.&lt;/p>
&lt;p>The P(doom) question isn&amp;rsquo;t entirely misguided, though. If your interlocutor specifies exactly what they mean by P(doom), say P(gradual disempowerment from power-seeking AI within the next decade|no regulations), and asks for the rough shape of your PDF, then your answer immediately becomes more informative. By asking for a small set of well-chosen estimates, you could get a fairly accurate idea of someone&amp;rsquo;s core beliefs. But again, you have to pick the right estimates.&lt;/p>
&lt;p>Finding these estimates is like asking for the relevant dimensions in a &amp;ldquo;political spectrum&amp;rdquo; of views on AI safety. If you were to visualise opinions within the AI safety space, what would be your axes? While such a plot would necessarily be a simplification, perhaps it could allow us to communicate our basic assumptions more effectively. This would lead to more well-informed discussions in the AI safety community.&lt;/p>
&lt;p>I imagine we want something between the P(doom) question and the kinds of questions used in expert surveys, like the &lt;a href="https://blog.aiimpacts.org/p/2023-ai-survey-of-2778-six-things">2023 Expert Survey on Progress in AI&lt;/a> or the &lt;a href="https://www.iaps.ai/research/ai-reliability-survey">AI Reliability &amp;amp; Security Research Priorities&lt;/a>. While the P(doom) question is too simple, the questionnaire questions are too complicated&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. We&amp;rsquo;re looking for questions that are as simple as possible, but no simpler - the kinds of questions you could answer at a cocktail party.&lt;/p>
&lt;p>Here are the five questions I wish people would have asked me, rather than asking for my P(doom). For some obvious variations, see the footnotes.&lt;/p>
&lt;ul>
&lt;li>AI timelines&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>: In what year will we have &lt;a href="https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/#id-1-defining-transformative-artificial-intelligence-transformative-ai">transformative AI&lt;/a>, i.e. AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>?&lt;/li>
&lt;li>A more informative P(doom)&lt;sup id="fnref:4">&lt;a href="#fn:4" class="footnote-ref" role="doc-noteref">4&lt;/a>&lt;/sup>: Assuming no further regulations on the development of AI systems, what is the probability of &lt;a href="https://gradual-disempowerment.ai/">gradual disempowerment&lt;/a> from AI systems before 2050?&lt;/li>
&lt;li>Threat model: Do the main risks from transformative AI come from bad actors developing &lt;a href="https://www.forethought.org/research/preparing-for-the-intelligence-explosion#highly-destructive-technologies">destructive technologies&lt;/a> and creating &lt;a href="https://www.forethought.org/research/preparing-for-the-intelligence-explosion#power-concentrating-mechanisms">power-concentrating mechanisms&lt;/a> or from AI systems seeking to eliminate humanity?&lt;/li>
&lt;li>Views on AI slowdown&lt;sup id="fnref:5">&lt;a href="#fn:5" class="footnote-ref" role="doc-noteref">5&lt;/a>&lt;/sup>: How heavily should the government regulate the development of future AI systems?&lt;/li>
&lt;li>Views on centralisation&lt;sup id="fnref:6">&lt;a href="#fn:6" class="footnote-ref" role="doc-noteref">6&lt;/a>&lt;/sup>: Should all leading AI companies be required to open-source their models, to ensure equal access to our most powerful AI systems?&lt;/li>
&lt;/ul>
&lt;p>These questions translate naturally into scales from -1 to 1. I also tried listing the questions in rough order of importance, so I&amp;rsquo;d use the first three questions as the axes of a 3D plot.&lt;/p>
&lt;p>Going through these questions and plotting your position relative to that of others can be amusing. However, it&amp;rsquo;s also an instructive exercise. After all, these are important questions. Finally, I&amp;rsquo;ve also found it pretty handy having default answers to these extremely difficult questions at cocktail parties.&lt;/p>
&lt;p>&lt;em>Thanks to Agatha Duzan for feedback on this text.&lt;/em>&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>For example, &amp;ldquo;Rate the extent to which you agree that resolving the core challenges of this sub-area and implementing the resulting solutions would significantly reduce the risk of severe harm (loss of &amp;gt;100 lives or &amp;gt;$10 billion in economic impact from AI)&amp;rdquo;, where a sub-area might be &amp;ldquo;Ethics-aware training and fine-tuning: Research on learning from imperfect ethical datasets, applying ethics-aware data curation methods, and incorporating collective ethical principles into model design.&amp;rdquo; Quite a mouthful.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>Variations: In what year will AI be capable of automating 99% of fully remote jobs? In what year will we have artificial general intelligence (AGI) - an AI which can match or exceed the cognitive abilities of human beings across any task?&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>For an interesting discussion on this topic, see &lt;a href="https://www.lesswrong.com/posts/K2D45BNxnZjdpSX2j/ai-timelines">this moderated discussion&lt;/a> between Ajeya Cotra, Daniel Kokotajlo and Ege Erdil.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:4">
&lt;p>Variations: Just modify the conditions, the definition of &amp;ldquo;doom&amp;rdquo; or the timeframe. Alternatively, what is the probability of AI having a net positive effect on the world within the next 20 years?&amp;#160;&lt;a href="#fnref:4" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:5">
&lt;p>Variations: What might be the minimum sufficient intervention to prevent gradual disempowerment from AIs?&amp;#160;&lt;a href="#fnref:5" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:6">
&lt;p>Variations: Should leading AI labs be placed under state ownership?&amp;#160;&lt;a href="#fnref:6" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/the-spectrum-of-views-on-ai-safety/</guid><pubDate>Sun, 06 Jul 2025 00:00:00 +0000</pubDate></item><item><title>On my relation to effective altruism</title><link>https://isabeldahlgren.github.io/on-my-relation-to-effective-altruism/</link><description>&lt;p>I&amp;rsquo;ve spent more time engaging with the effective altruist (EA) community this year. Not just reading EA books and blog posts, but participating in seminars, attending conferences and going on EA retreats. For context, I&amp;rsquo;d viewed myself as &amp;ldquo;EA adjacent&amp;rdquo; ever since I came across &lt;em>The Life You Can Save&lt;/em> back in high school. However, during my master&amp;rsquo;s degree, the prospect of graduating soon - of becoming an adult - made me reflect more carefully on EA.&lt;/p>
&lt;p>First, our definitions, taken from the &lt;a href="https://www.effectivealtruism.org/articles/introduction-to-effective-altruism">introduction to effective altruism&lt;/a>:&lt;/p>
&lt;blockquote>
&lt;p>Effective altruism is a project that aims to find the best ways to help others, and put them into practice.&lt;/p>
&lt;p>It’s both a &lt;strong>research field&lt;/strong>, which aims to identify the world’s most pressing problems and the best solutions to them, and a &lt;strong>practical community&lt;/strong> that aims to use those findings to do good.&lt;/p>
&lt;/blockquote>
&lt;p>When I refer to the EA community, I mean the practical community. I&amp;rsquo;ll also use the abbreviation &amp;ldquo;EA&amp;rdquo; to refer to effective altruists.&lt;/p>
&lt;p>Many people who have engaged with the EA community at some level or other find themselves questioning their relationship with EA. In fact, there are plenty of blog posts on the theme &amp;ldquo;EA identity crisis&amp;rdquo;&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. Here&amp;rsquo;s my contribution to the genre.&lt;/p>
&lt;p>So, do I consider myself part of the EA community?&lt;/p>
&lt;h3 id="yes">Yes &lt;a href="#yes" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Technically, yes.&lt;/p>
&lt;ul>
&lt;li>I think the &lt;a href="https://www.effectivealtruism.org/articles/introduction-to-effective-altruism#what-principles-unite-effective-altruism">core ideas of EA&lt;/a> - prioritisation, impartiality, open truthseeking and collaboration - make sense. The article does an excellent job at explaining these terms, so I&amp;rsquo;ll refer to their explanations&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. I&amp;rsquo;ve desperately tried red-teaming EA - I&amp;rsquo;d love for the &lt;a href="https://en.wikipedia.org/wiki/Famine,_Affluence,_and_Morality">drowning child argument&lt;/a> to be less convincing - but I still think the core ideas hold up.&lt;/li>
&lt;li>I care about finding the best ways to help others, and putting them into practice. While I don&amp;rsquo;t want to impose any moral standards on others, I feel a certain moral obligation to do good&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>. I also find working on high-impact projects fulfilling. It almost seems tautological that one should try to have a large positive impact on the world.&lt;/li>
&lt;li>I&amp;rsquo;ve attended several EA community events. All of these events have been tremendously valuable, both on a professional and a personal level.&lt;/li>
&lt;li>My interest in EA doesn&amp;rsquo;t seem to be &amp;ldquo;just a phase&amp;rdquo;. At this point, I&amp;rsquo;ve spent about five years reading and learning more about EA.&lt;/li>
&lt;/ul>
&lt;p>Maybe also non-technically? Here are some gut-feeling-level arguments:&lt;/p>
&lt;ul>
&lt;li>I enjoy exchanging ideas on EA-related topics with people in the EA community - it&amp;rsquo;s almost like a hobby of mine. But I also think there&amp;rsquo;s significant value in doing collaborative sensemaking on topics like existential risk, AGI timelines and longtermism.&lt;/li>
&lt;li>In general, I find many people in the EA community to be very thoughtful. Some pieces by Holden Karnofsky, Ajeya Cotra and Benjamin Todd have had a profound influence on my worldview. Similarly, some of my favourite non-fiction books are about EA. For example, I thought &lt;em>Doing Good Better&lt;/em> and &lt;em>What We Owe The Future&lt;/em> were exceptional reads.&lt;/li>
&lt;/ul>
&lt;h3 id="no">No &lt;a href="#no" class="hash">#&lt;/a>&lt;/h3>
&lt;p>But of course, it&amp;rsquo;s complicated&amp;hellip;&lt;/p>
&lt;ul>
&lt;li>
&lt;p>I mainly care about existential risk reduction, although this may very well change in the future. I&amp;rsquo;ve also devoted much more time and effort to AI safety than to any other cause area&lt;sup id="fnref:4">&lt;a href="#fn:4" class="footnote-ref" role="doc-noteref">4&lt;/a>&lt;/sup>. It&amp;rsquo;d be more accurate to say that I&amp;rsquo;m into AI safety and existential risk reduction rather than EA, which is a much broader term.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>I don&amp;rsquo;t seem to fit the public perception of an EA. In my experience, many people think all EAs work on farm animal welfare or global poverty reduction. Sure, I&amp;rsquo;m vegetarian (Peter Singer&amp;rsquo;s fault), but I still haven&amp;rsquo;t donated to GiveWell, nor signed the 10% pledge. This point is mostly about me not living up to my moral standards, though.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>In general, I try to avoid identity markers related to social movements. While I think neutrality is somewhat of an illusion, I want to hold my opinions lightly. My worry is that self-identifying as an EA might make me less open-minded. But perhaps this worry is somewhat ungrounded, at least when it comes to EA. People in the EA community tend to be very open to criticism&lt;sup id="fnref:5">&lt;a href="#fn:5" class="footnote-ref" role="doc-noteref">5&lt;/a>&lt;/sup>.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>The EA community has its flaws, despite making a number of changes after FTX. Many of the concerns raised in &lt;a href="https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1">this article&lt;/a> are still valid. This excerpt summarises one of my main concerns well:&lt;/p>
&lt;blockquote>
&lt;p>The EA community is notoriously homogeneous, and the “average EA” is extremely easy to imagine: he is a white male in his twenties or thirties from an upper-middle class family in North America or Western Europe. He is ethically utilitarian and politically centrist; an atheist, but culturally protestant. He studied analytic philosophy, mathematics, computer science, or economics at an elite university in the US or UK. He is neurodivergent. He thinks space is really cool. He highly values intelligence, and believes that his own is significantly above average. He hung around LessWrong for a while as a teenager, and now wears EA-branded shirts and hoodies, drinks Huel, and consumes a narrow range of blogs, podcasts, and vegan ready-meals.&lt;/p>
&lt;/blockquote>
&lt;/li>
&lt;/ul>
&lt;h3 id="resolution">Resolution &lt;a href="#resolution" class="hash">#&lt;/a>&lt;/h3>
&lt;p>While I do have some reservations, my views are pretty consistent with the EA agenda. Also, I&amp;rsquo;ll (reluctantly) admit that overthinking the question of whether you&amp;rsquo;re part of a given community is very EA.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>&lt;a href="https://forum.effectivealtruism.org/posts/89GdH5unSb2Sze6kj/elements-of-ea-your-ea-identity-can-be-bespoke">This post&lt;/a> on EA identities is a good starting point. There&amp;rsquo;s also &lt;a href="https://www.neelnanda.io/blog/8-altruism">Neel Nanda&amp;rsquo;s favourite blog post&lt;/a>.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>I endorse a much broader form of the impartiality principle than the one outlined in the article. I also think strangers, animals and future people should be part of our &lt;a href="https://en.wikipedia.org/wiki/The_Expanding_Circle">circle of moral consideration&lt;/a>.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>I assume this is because my mother, coming from the Philippines, always told me to be grateful. I&amp;rsquo;m glad she did, but I used to find it annoying as a child.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:4">
&lt;p>For a list of cause areas, see the 80,000 hours list of &lt;a href="https://80000hours.org/problem-profiles/">the world&amp;rsquo;s most pressing problems&lt;/a>.&amp;#160;&lt;a href="#fnref:4" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:5">
&lt;p>See the posts in the &lt;a href="https://forum.effectivealtruism.org/topics/criticism-of-effective-altruism">Criticism of effective altruism thread&lt;/a>. I very much liked &lt;a href="https://www.benkuhn.net/ea-critique/">Ben Kuhn&amp;rsquo;s critique of EA&lt;/a>.&amp;#160;&lt;a href="#fnref:5" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/on-my-relation-to-effective-altruism/</guid><pubDate>Sun, 29 Jun 2025 00:00:00 +0000</pubDate></item><item><title>On question taste</title><link>https://isabeldahlgren.github.io/on-question-taste/</link><description>&lt;h3 id="i">I. &lt;a href="#i" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Children like asking questions. Some of their questions are very hard: &amp;ldquo;Why do people die?&amp;rdquo;. Other questions expose our biases: &amp;ldquo;Couldn&amp;rsquo;t we invite the homeless man to dinner?&amp;rdquo; Then there are all the annoying questions: &amp;ldquo;When will we arrive?&amp;rdquo;&lt;/p>
&lt;p>In my experience, school didn&amp;rsquo;t teach us to ask questions - we were just taught how to answer them. To pass the test, you only had to memorise the material in the textbook.&lt;/p>
&lt;p>Perhaps this isn&amp;rsquo;t too surprising. It&amp;rsquo;s hard to measure someone&amp;rsquo;s ability to ask questions objectively. But above all, educational institutions serve many different purposes apart from spreading knowledge&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. Schools should also produce good citizens. A society full of radical skeptics wouldn&amp;rsquo;t function properly (hence the execution of Socrates). So children might hear that some questions are meaningless or irrelevant, even in school.&lt;/p>
&lt;h3 id="ii">II. &lt;a href="#ii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>On two different occasions, I&amp;rsquo;ve heard two distinguished professors at ETH mention that students ask too few questions. They weren&amp;rsquo;t referring to specific questions about the lecture, but to something like &lt;a href="https://www.lesswrong.com/posts/Thwfy4gNFx9kHgvov/research-hamming-questions">Research Hamming questions&lt;/a>. For example, one could ask about improvements of a particular result or whether a proof technique generalises.&lt;/p>
&lt;p>They make a good point. The ability to ask interesting, open-ended questions seems like a key research skill; call it question taste. Intuitively, a person with question taste is like a child with expert domain knowledge.&lt;/p>
&lt;p>Certain groups of people are known for their question taste. For example, consider journalists. They have undergone extensive training in asking questions as part of their university degree. Moreover, they constantly practise the skill of asking questions at work. No wonder they get pretty good at it. Similarly, researchers, psychologists, essayists and investors tend to have good question taste, as questioning is part of their job.&lt;/p>
&lt;h3 id="iii">III. &lt;a href="#iii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>I assume, perhaps somewhat naïvely, that acquiring question taste is mostly about deliberate practice. The implementation details will depend on the domain you&amp;rsquo;re working in, though.&lt;/p>
&lt;p>A basic strategy might be to identify the people in your area who seem to be asking the right questions and try copying what they do. For example, I&amp;rsquo;d like to formulate good questions about research agendas in AI safety someday, so I&amp;rsquo;ll often make note of questions raised by leading researchers in podcasts, blog posts or papers. Sometimes I write down my own questions and discuss them with others for feedback, although this is more time-consuming.&lt;/p>
&lt;p>Having a safe environment in which to practise asking questions - a training ground - also seems important. If you&amp;rsquo;re a student, you can attend student-run seminars, where you might feel more comfortable asking questions. Regardless of the kind of questions you aim to ask, you need to find a community to give you feedback. And hopefully, you get some answers too.&lt;/p>
&lt;p>As you practise asking questions, you&amp;rsquo;ll inevitably ask some &amp;ldquo;bad questions&amp;rdquo;. Part of the reason why children are so inquisitive is their lack of self-consciousness. They don&amp;rsquo;t care if their questions make them sound stupid. So, in order to ask good questions, make a conscious effort, find a supportive community and maybe don&amp;rsquo;t take yourself too seriously.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>This is argued in Bryan Caplan&amp;rsquo;s book &lt;a href="https://en.wikipedia.org/wiki/The_Case_Against_Education">The Case Against Education&lt;/a>.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/on-question-taste/</guid><pubDate>Sun, 22 Jun 2025 00:00:00 +0000</pubDate></item><item><title>A taxonomy of examples</title><link>https://isabeldahlgren.github.io/a-taxonomy-of-examples/</link><description>&lt;p>I&amp;rsquo;m mildly obsessed with examples. Whenever I feel confused, it&amp;rsquo;s often because I don&amp;rsquo;t know enough examples. Proofs can be confusing too, if some step is poorly explained. But that kind of confusion tends to be local. You can still have a good grasp of the theory. If you don&amp;rsquo;t know enough examples, you feel generally lost. Textbooks lacking in examples end up being dry. In contrast, books with well-chosen examples are a pleasure to read&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>.&lt;/p>
&lt;p>According to the Cambridge Dictionary, an example is &amp;ldquo;something that is typical of the group of things that it is a member of&amp;rdquo;.&lt;/p>
&lt;p>In mathematics, there are many kinds of examples, and they all serve different purposes. Traditionally, fields like topology and measure theory place a greater emphasis on counterexamples. In algebra, there are more prototypical examples; definitions are often followed by an exhaustive list of objects of that category&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. In the context of mathematics, the dictionary definition is too simplistic! So here is an attempt at classifying the main kinds of examples, or, alternatively, an examples appreciation post.&lt;/p>
&lt;h3 id="understanding-definitions">Understanding definitions &lt;a href="#understanding-definitions" class="hash">#&lt;/a>&lt;/h3>
&lt;p>First, there are the examples helping us understand definitions.&lt;/p>
&lt;ul>
&lt;li>Prototypical examples: examples just complex enough to capture the essential properties of an object.
&lt;ul>
&lt;li>Fundamental groups: the fundamental group of the circle $\pi_1(\mathbb{S}^1, x_0)$.&lt;/li>
&lt;li>Transcendental field extensions: $\mathbb{Q}(\pi)$.&lt;/li>
&lt;li>Modular forms: the Poincaré series $P_{m, k}(z)$.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Canonical examples: the canonical construction of something - the construction involving the least arbitrariness.
&lt;ul>
&lt;li>Stochastic processes: the coordinate process.&lt;/li>
&lt;li>Embeddings into bi-duals: take the embedding $V \to V^{**}$ sending $v$ to the evaluation.&lt;/li>
&lt;li>Rings: the integers $\mathbb{Z}$.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Foundational examples: examples of special interest and which form the basis for further theory.
&lt;ul>
&lt;li>Anything analysis: the Gaussian $\phi(x) = e^{-x^2/2}$.&lt;/li>
&lt;li>Stochastic processes: the simple random walk on $\mathbb{Z}$.&lt;/li>
&lt;li>Modular forms: the $j$–invariant.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Basic examples: the simplest possible instance of an object. The kind of example an uncreative student might cite in an exam&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>.
&lt;ul>
&lt;li>Groups: the trivial group $G := \{e\}$.&lt;/li>
&lt;li>Martingales: the martingale $(X_t)_{t \ge 0}$ where $X_t \equiv 1$ for all $t$.&lt;/li>
&lt;li>Banach space: just take $\mathbb{R}$.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Non-examples: objects which are not instances of something.
&lt;ul>
&lt;li>A non-ring: the natural numbers $\mathbb{N}$.&lt;/li>
&lt;li>A non-tempered distribution: the function $e^t$.&lt;/li>
&lt;li>A non-Artinian ring: the integers $\mathbb{Z}$.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Computational examples: examples involving a computation and that illustrate how to work with a given object.
&lt;ul>
&lt;li>Compute $\mathbb{R}[x] \otimes \mathbb{C}$.&lt;/li>
&lt;li>Find the Euler product expansion for the Dirichlet generating series of the Möbius function $\mu = 1^{-*}$.&lt;/li>
&lt;li>Check that $c^{-1/2} W_{ct}$ is again Brownian motion.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Pathological examples: examples illustrating distinctions between different notions, or the limitations of a certain concept.
&lt;ul>
&lt;li>Continuity $\neq$ differentiability: consider the Weierstrass function, which is continuous everywhere but nowhere differentiable.&lt;/li>
&lt;li>Connectedness $\neq$ path connectedness: consider the Topologist&amp;rsquo;s sine curve, which is connected but not path-connected.&lt;/li>
&lt;li>Lebesgue integral $\neq$ Riemann integral: try integrating the Dirichlet function, the characteristic function of the rationals.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Toy examples: examples you can manipulate to have some desired properties. When looking for counterexamples, you typically start with some toy example.
&lt;ul>
&lt;li>Random variable: define $$X := \begin{cases}\alpha, &amp; \text{with probability } p, \\\ \beta, &amp; \text{with probability } 1 - p.\end{cases}$$&lt;/li>
&lt;li>Measure theory: simple functions.&lt;/li>
&lt;li>Galois groups: consider $\mathrm{Gal}(\mathbb{Q}(\sqrt{\alpha}):\mathbb{Q})$.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Real-world examples: a real-world scenario involving the object of interest. These help motivate the study of the given object and are useful for gaining intuition.
&lt;ul>
&lt;li>Stochastic processes: a Poisson process as a description of the number of raindrops falling in a given square.&lt;/li>
&lt;li>PDE: the heat equation.&lt;/li>
&lt;li>Graph theory: a network modelling the flow of some fluid.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
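&lt;p>To make the Lebesgue-vs-Riemann example above concrete, take the Dirichlet function $f := 1_{\mathbb{Q} \cap [0, 1]}$. Every subinterval of $[0, 1]$ contains both rationals and irrationals, so every upper Riemann sum equals $1$ while every lower sum equals $0$, and the Riemann integral doesn&amp;rsquo;t exist. But $\mathbb{Q} \cap [0, 1]$ is countable, hence a Lebesgue null set, so $$\int_0^1 f \ d\lambda = \lambda(\mathbb{Q} \cap [0, 1]) = 0.$$&lt;/p>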
&lt;h3 id="understanding-results">Understanding results &lt;a href="#understanding-results" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Second, there are the examples helping us understand results. I&amp;rsquo;ll count applications of theorems among these kinds of examples.&lt;/p>
&lt;ul>
&lt;li>Counterexamples: examples illustrating how a given result breaks down if we drop assumptions&lt;sup id="fnref:4">&lt;a href="#fn:4" class="footnote-ref" role="doc-noteref">4&lt;/a>&lt;/sup>.
&lt;ul>
&lt;li>Fatou&amp;rsquo;s lemma: we really do need non-negativity. Consider the sequence $f_n := - 1_{[n, n+1]}$.&lt;/li>
&lt;li>Open mapping theorem: surjectivity is necessary, since the zero map isn&amp;rsquo;t open.&lt;/li>
&lt;li>Nullstellensatz: the field $K$ must be algebraically closed. For $K = \mathbb{R}$, the ideal $I = (x^2 + 1)$ is maximal but not of the form $(x - a)$.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Basic applications: check that the conditions of a theorem are satisfied, apply the theorem and see what you get. These examples are particularly useful if the theorem statement seems involved.
&lt;ul>
&lt;li>Hurewicz&amp;rsquo; theorem: use it to find $H_1(\mathbb{S}^n)$.&lt;/li>
&lt;li>Dedekind-Kummer: compute the factorisation of $(p)$ in a ring of integers.&lt;/li>
&lt;li>Dirichlet&amp;rsquo;s unit theorem: the theorem allows us to determine the unit group of the ring of integers of $\mathbb{Q}(\sqrt{d})$, which is finite when $d$ is negative and infinite of rank one when $d$ is positive.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Non-basic applications: or corollaries. Deduce something interesting from a theorem.
&lt;ul>
&lt;li>Optional stopping: allows you to compute laws of hitting times.&lt;/li>
&lt;li>The ring of integers is Dedekind: this gives us a satisfying proof of Fermat&amp;rsquo;s theorem on sums of two squares.&lt;/li>
&lt;li>Hausdorff-Young: an application of Riesz-Thorin tells us that the Fourier transform defines a bounded linear operator from $L^p$ to $L^q$ whenever $1 \le p \le 2$ and $1/p + 1/q = 1$.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Amusing applications: examples mentioned during lectures for amusement.
&lt;ul>
&lt;li>Borsuk-Ulam: there are two antipodal points on earth with the same temperature and the same pressure.&lt;/li>
&lt;li>Mean-value theorem: you can catch a speeding driver using the mean-value theorem, since a car&amp;rsquo;s average speed over an interval is attained at some instant.&lt;/li>
&lt;li>Four colour theorem: you can colour a map using just four colours so no two adjacent countries have the same colour.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Satisfying applications: prove a well-known theorem as an application of the fancy theory you developed.
&lt;ul>
&lt;li>The insolvability of the quintic: &amp;ldquo;just&amp;rdquo; an example application of Galois theory.&lt;/li>
&lt;li>The central limit theorem: falls out of the computation of the characteristic function of the scaled sum.&lt;/li>
&lt;li>The fundamental theorem of algebra: an application of Liouville&amp;rsquo;s theorem.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
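&lt;p>To spell out the Fatou counterexample above: for $f_n := - 1_{[n, n+1]}$ we have $\liminf_n f_n = 0$ pointwise, so $\int \liminf_n f_n \ d\lambda = 0$, while $\int f_n \ d\lambda = -1$ for every $n$, so $\liminf_n \int f_n \ d\lambda = -1$. Without non-negativity, Fatou&amp;rsquo;s inequality $\int \liminf_n f_n \ d\lambda \le \liminf_n \int f_n \ d\lambda$ would read $0 \le -1$.&lt;/p>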
&lt;p>Are examples the core of mathematics? The answer is very much a matter of personal taste. I&amp;rsquo;d say yes. Apparently, von Neumann once said:&lt;/p>
&lt;blockquote>
&lt;p>&amp;ldquo;In mathematics you don&amp;rsquo;t understand things. You just get used to them.&amp;rdquo;&lt;/p>
&lt;/blockquote>
&lt;p>There&amp;rsquo;s some truth to this. And the way we get used to things is by studying examples.&lt;/p>
&lt;p>&lt;em>Thanks to Alois Schaffler for suggesting the last kind of example.&lt;/em>&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>&lt;em>Basic Category Theory&lt;/em> or &lt;a href="https://venhance.github.io/napkin/Napkin.pdf">The Napkin&lt;/a> are amazing in this regard.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>In &lt;em>Abstract Algebra&lt;/em> by Dummit and Foote, some definitions are followed by more than ten (!) examples.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>I once cited $\mathbb{Q}$ as an example of a non-Banach space in an exam. The graders probably weren&amp;rsquo;t particularly happy with my response, but I received full points.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:4">
&lt;p>In my experience, memorising counterexamples is the best way to memorise the assumptions of a theorem. This is somewhat counterintuitive, given that it involves memorising more information.&amp;#160;&lt;a href="#fnref:4" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/a-taxonomy-of-examples/</guid><pubDate>Sun, 15 Jun 2025 00:00:00 +0000</pubDate></item><item><title>On the joy of customising</title><link>https://isabeldahlgren.github.io/on-the-joy-of-customising/</link><description>&lt;p>Why do so many people use Safari, the jarring black VSCode colour theme and Overleaf? Because it&amp;rsquo;s the default option.&lt;/p>
&lt;p>Switching to a non-default option requires conscious effort. First, you need to realise that your current setup is suboptimal; the probability that the default happens to be optimal is tiny. But if you already knew about the benefits of making a few small changes, you&amp;rsquo;d have made them. So you need to believe that switching is worthwhile before you can verify it, which is, indeed, a leap of faith.&lt;/p>
&lt;p>The pros are all believers. Most software engineers customise their code editors and terminal shells heavily. If you spend all your working hours working on a computer, spending one Sunday improving your working environment is well worth it. Similarly, I know some professors at ETH are particular about their writing tools, only using fountain pens and high-end chalk.&lt;/p>
&lt;p>But if you&amp;rsquo;re neither a software engineer nor an ETH professor, tinkering with command-line interface (CLI) tools and splurging money on fountain pens might feel indulgent. This makes up another psychological barrier towards switching from the default. In fact, you might question whether you&amp;rsquo;re even entitled to use the tools of the pros. And to some extent, this sentiment is valid.&lt;/p>
&lt;p>However, I still think most people err on the side of customising too little, especially when it comes to tech. Most of us are heavy tech users; average daily screen time globally seems to be over six hours. When I say someone has a good tech setup, I mean that they efficiently use software appropriate for their computer-related tasks. It&amp;rsquo;s much like having a good office with a comfortable chair and a proper desk, but it need not require expensive purchases.&lt;/p>
&lt;p>Adding a few selected tools to your setup can massively improve productivity. There are some general programs useful to any computer user, like Google Drive, Google Calendar and Raycast. I also think having a dedicated note-taking system, whether it be in Obsidian, Notion or Apple Notes, pays off. However, you&amp;rsquo;ll get the most leverage from finding the tools relevant to the tasks you&amp;rsquo;re doing and mastering them.&lt;/p>
&lt;p>For example, I mostly use my MacBook for typesetting, writing and coding. For typesetting, I use NeoVim with custom snippets, inspired by Gilles Castel&amp;rsquo;s iconic &lt;a href="https://castel.dev/post/lecture-notes-1/">VimTeX setup&lt;/a>. Nowadays, I also use Typst rather than LaTeX. Although I don&amp;rsquo;t necessarily type faster in Typst, I spend much less time debugging cryptic error messages. All in all, I estimate that these changes have doubled my writing speed. There are also ergonomic benefits to this setup, as opposed to just writing in Overleaf. As for coding, I&amp;rsquo;ve found Cursor to be a complete game changer, allowing me to implement experiments in about a tenth of the time. Other than that, I&amp;rsquo;m &amp;ldquo;just&amp;rdquo; using NeoVim keybindings in VSCode and a few CLI tools allowing me to navigate between folders more easily. As can be seen from &lt;a href="https://www.alignmentforum.org/posts/dZFpEdKyb9Bf4xYn7/tips-for-empirical-alignment-research">this post&lt;/a>, I have a lot to learn here.&lt;/p>
&lt;p>Furthermore, customising your computer makes for a much better user experience. I value good design, so having my preferred colour scheme in Obsidian, VSCode and WezTerm makes my computer a joy to use. I also use the Zen Browser for a cleaner user interface.&lt;/p>
&lt;p>Regardless of whether you&amp;rsquo;re a software aesthete or not, it&amp;rsquo;s worth exploring tech tools relevant to your work.&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/on-the-joy-of-customising/</guid><pubDate>Sun, 08 Jun 2025 00:00:00 +0000</pubDate></item><item><title>Consume less AI safety news</title><link>https://isabeldahlgren.github.io/consume-less-ai-safety-news/</link><description>&lt;p>It&amp;rsquo;s hard staying on top of all the AI safety news. Some people, like &lt;a href="https://thezvi.wordpress.com/about/">Zvi&lt;/a>, have basically made this their full-time job.&lt;/p>
&lt;p>A common failure mode for forming views on AI safety is consuming too much information. It&amp;rsquo;s a tendency I&amp;rsquo;ve observed in myself, as well as in others in the AI safety community.&lt;/p>
&lt;p>I think it comes from the urge to solve the AI alignment problem quickly. It&amp;rsquo;s also an exciting time to be working in AI safety, with many rapid advancements being made. And since AI safety is adjacent to AI more broadly, there&amp;rsquo;s a lot of general excitement surrounding the area.&lt;/p>
&lt;p>So we might imagine someone who starts following all the leading researchers on Twitter, listening to &lt;em>The Cognitive Revolution&lt;/em> while commuting and reading LessWrong posts before going to bed. Or maybe they&amp;rsquo;ll have lengthy discussions about AI governance in WhatsApp groups and watch Robert Miles&amp;rsquo; YouTube videos over meals.&lt;/p>
&lt;p>But if you&amp;rsquo;re looking to gain a deeper understanding of the AI safety landscape, this isn&amp;rsquo;t enough. You&amp;rsquo;d have to engage more with the material, shifting your creation-to-consumption ratio towards more creation. And you&amp;rsquo;d have to discuss your views with people in real life and, if possible, engage in a local community&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. A common piece of advice for people aiming to become experts is to focus on &lt;a href="https://forum.effectivealtruism.org/posts/ckj6Moau9qpYArHWc/want-to-be-an-expert-build-deep-models">building deep models&lt;/a>.&lt;/p>
&lt;p>Even if your goal is just to get the big picture, the above approach seems needlessly high-effort. There are many excellent resources summarising the main ideas in AI safety, such as the 80,000 hours &lt;a href="https://80000hours.org/problem-profiles/risks-from-power-seeking-ai/">problem profile&lt;/a>. It&amp;rsquo;s a 60-minute read with a very high signal-to-noise ratio.&lt;/p>
&lt;p>Either way, binging AI safety-related material won&amp;rsquo;t help fix the AI alignment problem. It&amp;rsquo;s stressful, and perhaps self-defeating. Given that the AI safety landscape is changing so rapidly, much of what we&amp;rsquo;re seeing is noise. To gain conceptual clarity, perhaps you&amp;rsquo;ll benefit from consuming less AI safety-related news. Unless you&amp;rsquo;re working full-time on AI safety, either as a policy-maker or a researcher, this probably won&amp;rsquo;t negatively influence your ability to do good work.&lt;/p>
&lt;p>Hard problems require careful reflection. Although AI advances fast, we must think slowly about how to ensure things go well.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>My friend &lt;a href="https://mkodama.org/">Miles Kodama&lt;/a> put it well: &amp;ldquo;It is easy to BS to a screen&amp;rdquo;.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/consume-less-ai-safety-news/</guid><pubDate>Sun, 01 Jun 2025 00:00:00 +0000</pubDate></item><item><title>Understanding the AI alignment problem</title><link>https://isabeldahlgren.github.io/understanding-the-ai-alignment-problem/</link><description>&lt;p>Broadly speaking, the AI alignment problem refers to the problem of ensuring AI systems do what we want them to do. I like the definition used by Anthropic &lt;a href="https://www.anthropic.com/news/core-views-on-ai-safety#:~:text=build%20safe%2C%20reliable%2C%20and%20steerable%20systems%20when%20those%20systems%20are%20starting%20to%20become%20as%20intelligent%20and%20as%20aware%20of%20their%20surroundings%20as%20their%20designers">here&lt;/a>:&lt;/p>
&lt;blockquote>
&lt;p>“build safe, reliable, and steerable systems when those systems are starting to become as intelligent and as aware of their surroundings as their designers”&lt;/p>
&lt;/blockquote>
&lt;p>The general idea is pretty simple. But there are many things to unpack here, and understanding what this means in practice is hard. For one, why might we end up training unsafe AI systems? And even if you think this is possible, it&amp;rsquo;s not clear what regulations might be appropriate. Here are some metaphors I found particularly useful for gaining a deeper understanding of some aspects of AI alignment.&lt;/p>
&lt;h3 id="the-eight-year-old-ceo">The eight-year-old CEO &lt;a href="#the-eight-year-old-ceo" class="hash">#&lt;/a>&lt;/h3>
&lt;p>In this excellent &lt;a href="https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/">blog post&lt;/a>&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>, Ajeya Cotra asks you to imagine the following scenario:&lt;/p>
&lt;blockquote>
&lt;p>Imagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money). You have to hire these grownups based on a work trial or interview you come up with &amp;ndash; you don&amp;rsquo;t get to see any resumes, don&amp;rsquo;t get to do reference checks, etc. Because you&amp;rsquo;re so rich, tons of people apply for all sorts of reasons.&lt;/p>
&lt;p>Your candidate pool includes:&lt;/p>
&lt;p>&lt;strong>Saints&lt;/strong> &amp;ndash; people who genuinely just want to help you manage your estate well and look out for your long-term interests.&lt;/p>
&lt;p>&lt;strong>Sycophants&lt;/strong> &amp;ndash; people who just want to do whatever it takes to make you short-term happy or satisfy the letter of your instructions regardless of long-term consequences.&lt;/p>
&lt;p>&lt;strong>Schemers&lt;/strong> &amp;ndash; people with their own agendas who want to get access to your company and all its wealth and power so they can use it however they want.&lt;/p>
&lt;/blockquote>
&lt;p>Deciding whom to hire is extremely difficult &amp;ndash; you&amp;rsquo;re just eight! In this analogy, humanity is the eight-year-old CEO. Hiring a candidate is like training a superhuman AI model that will best serve our interests.&lt;/p>
&lt;h3 id="building-planes">Building planes &lt;a href="#building-planes" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Suppose aerospace engineers have developed a new plane model. It&amp;rsquo;s energy-efficient, cheap to produce and more comfortable for passengers. However, the engineers don&amp;rsquo;t fully understand the internal workings of the engine. During testing, the engine seems to work all right. The engineers identified a few issues, but these could all be fixed quite easily. Would you be comfortable with this plane being produced for commercial use?&lt;/p>
&lt;p>Here, the AI models are like the engines. We know how to build AI models capable of writing poetry and conducting PhD-level research, but our understanding of how these models learn is relatively limited&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. So, we should probably reflect more carefully on how we&amp;rsquo;re deploying LLMs.&lt;/p>
&lt;h3 id="drug-regulation">Drug regulation &lt;a href="#drug-regulation" class="hash">#&lt;/a>&lt;/h3>
&lt;p>It takes years for newly developed drugs to reach consumers. First, you need preclinical trials. Then you carry out clinical trials in three distinct phases. This done, you need the approval of a regulatory agency, such as the FDA. It&amp;rsquo;s not uncommon for the entire process to take 10-15 years. Given that we subject drugs to such rigorous testing, why not do the same for LLMs?&lt;/p>
&lt;p>I first heard this analogy in &lt;a href="https://open.spotify.com/episode/38R2p5TG0uO02q3xybxsvR?si=7f8fa707ea174823">this podcast&lt;/a> (Swedish, sorry), where Olle Häggström makes the case for AI slowdown. I think the above analogy is quite compelling, although I don&amp;rsquo;t fully share his views.&lt;/p>
&lt;h3 id="the-hustler">The hustler &lt;a href="#the-hustler" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Imagine a person trying to learn a new skill, say playing Go. He has memorised every textbook on Go ever published, as well as all the games played by professional Go players. Moreover, he&amp;rsquo;s extremely hardworking: he plays roughly 1.5 million games against himself per day. (He doesn&amp;rsquo;t need any sleep, and he happens to think very quickly.) Given the amount of practice he gets, how can normal humans hope to defeat him?&lt;/p>
&lt;p>Here, the hustler is similar to an RL system. To me, this analogy makes the prospect of an intelligence explosion seem much more plausible&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup> and less sci-fi-ish.&lt;/p>
&lt;h3 id="final-thoughts">Final thoughts &lt;a href="#final-thoughts" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Finally, to understand the alignment problem, I think it&amp;rsquo;s also worth appreciating the potential impact of superhuman AGI. To me, it seems like superhuman AI could be about as transformative as the industrial revolution. At the very least, I&amp;rsquo;d expect it to be as impactful as electricity. So, ensuring the development of AGI &amp;ldquo;goes well&amp;rdquo; seems like a key problem of our time.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>This is one of my all-time favourite pieces on AI alignment.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>I don&amp;rsquo;t count &amp;ldquo;just backpropagate&amp;rdquo; as a satisfactory answer!&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>This analogy was inspired by a conversation with Samuel Ratnam at EAG.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/understanding-the-ai-alignment-problem/</guid><pubDate>Sun, 25 May 2025 00:00:00 +0000</pubDate></item><item><title>A semester paper retrospective</title><link>https://isabeldahlgren.github.io/a-semester-paper-retrospective/</link><description>&lt;p>This is a follow-up on my &lt;a href="https://isabeldahlgren.github.io/two-results-from-probabilistic-number-theory/">previous article&lt;/a>, where I share some thoughts on the process of writing a semester paper.&lt;/p>
&lt;p>Lots of disclaimers: this is all very specific to my experience: the topic of my paper, my supervisor, my workload in other courses, etc. Also, I&amp;rsquo;m certainly in no position to give advice on mathematical writing; these are just reflections on what worked and what didn&amp;rsquo;t work for me.&lt;/p>
&lt;h3 id="on-writing-papers">On writing papers &lt;a href="#on-writing-papers" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Here are some principles for writing mathematical papers that I tried to follow, mostly based on feedback from my supervisors. Of all the helpful feedback I received, these are the points I found most useful:&lt;/p>
&lt;ul>
&lt;li>Top-down exposition: concretely, this meant explaining how all lemmas would come together before proving each lemma. This minimises the amount of task-switching for the reader. In addition, this helps motivate the lemmas and allows the reader to decide which parts to skip.&lt;/li>
&lt;li>Short proofs: as a rule of thumb, I tried breaking up proofs in such a way that proofs of subresults would fit on one page. To my surprise, in most cases, this was doable.&lt;/li>
&lt;li>Reducing cognitive load: a piece of feedback I received from Vivian was to try reducing the cognitive load of the reader; it&amp;rsquo;s also a major theme &lt;a href="https://ngtriant.github.io/notes/practical_suggestions_for_mathematical_writing.pdf">here&lt;/a>. This idea really resonated with me. Concretely, this meant doing things like:
&lt;ul>
&lt;li>Restating parameters: for example, one might go &amp;ldquo;Recalling that $X = \text{definition of }X$, we obtain&amp;hellip;&amp;rdquo; rather than &amp;ldquo;By our choice of $X$, &amp;hellip;.&amp;rdquo;.&lt;/li>
&lt;li>Creating indices of notation and parameters: very relevant if your proof is notation-heavy.&lt;/li>
&lt;li>More descriptive text: rather than writing &amp;ldquo;we have&amp;rdquo; or &amp;ldquo;thus&amp;rdquo; before a computation, I tried describing which device I was using. Mentioning that you&amp;rsquo;re using, say, a union bound doesn&amp;rsquo;t take up much additional space on the page, and it makes for a better reading experience. (Also, if you&amp;rsquo;re doing analytic number theory, you need more alternatives to &amp;ldquo;we have&amp;rdquo;!)&lt;/li>
&lt;li>Indicating what won&amp;rsquo;t be proved: in order to keep the report at a reasonable length, I had to omit the proofs of some preliminary results and avoid repeating similar arguments. I tried making this clear to the reader, so I wouldn&amp;rsquo;t leave them hanging.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>What I would have found useful: initially, I didn&amp;rsquo;t know how many details to include. If I&amp;rsquo;d tried explaining every detail I&amp;rsquo;d struggled with when reading the proof, the text would have been far too verbose! Vivian had some good advice here: aim for the level of detail you&amp;rsquo;d have liked when reading it for the first time.&lt;/li>
&lt;/ul>
&lt;p>For future papers, I&amp;rsquo;m considering trying the following:&lt;/p>
&lt;ul>
&lt;li>Including dependency graphs: although a bit unconventional, these can be tremendously helpful for long and convoluted proofs. See e.g. my piece on &lt;a href="https://isabeldahlgren.github.io/hunting-dependencies/">dependencies&lt;/a>.&lt;/li>
&lt;li>Commenting on dead ends: a few words about why the naïve approach fails can be very illuminating. This can help motivate the use of some very complicated tool and potentially save the reader a lot of time. While remarks on failed proof attempts can sometimes be found in the &amp;ldquo;Discussion&amp;rdquo; section of a paper, I think they aren&amp;rsquo;t given as much attention as they deserve (an unfortunate instance of &lt;a href="https://en.wikipedia.org/wiki/Publication_bias">publication bias&lt;/a>).&lt;/li>
&lt;li>Reducing cognitive load, even more: if I&amp;rsquo;d kept this in mind during the entire writing process, I think I&amp;rsquo;d have done a few things differently. For further ideas on how to reduce cognitive load, I highly recommend the previously mentioned &lt;a href="https://ngtriant.github.io/notes/practical_suggestions_for_mathematical_writing.pdf">piece&lt;/a>.&lt;/li>
&lt;/ul>
&lt;h3 id="lessons">Lessons &lt;a href="#lessons" class="hash">#&lt;/a>&lt;/h3>
&lt;p>If I were to write a thesis or paper again, I would do a lot differently. Here are the main changes:&lt;/p>
&lt;ul>
&lt;li>Do it in one go: it would have been much more enjoyable and efficient to do most of the semester paper within a much shorter period of time. If you&amp;rsquo;re trying to understand an involved argument, you need all the relevant notions floating around in your brain at the same time. Don&amp;rsquo;t read one lemma per week.&lt;/li>
&lt;li>Request additional feedback: if your supervisor is kind enough to offer additional feedback, that&amp;rsquo;s extremely valuable.&lt;/li>
&lt;li>As soon as you get it, typeset: you&amp;rsquo;ll have to do it anyway, and you might as well do it when you understand it. This also helps corroborate your understanding.&lt;/li>
&lt;li>Know when to ask for help: I wasn&amp;rsquo;t sure how long it was reasonable for me to be stuck on a particular passage before asking for help, and I know supervisors have different preferences here. Trivial fix: ask your supervisor &amp;ldquo;For how long should I be stuck before asking for help?&amp;rdquo;&lt;/li>
&lt;li>Know how to ask for help: at first, I&amp;rsquo;d ask questions during meetings. But I soon began emailing a list of questions to my supervisor a few days before our check-ins. That way, I think we both got more out of the meetings.&lt;/li>
&lt;li>Typesetting takes time: it seems to be a law of nature that TeXing always takes longer than expected. This happened as I was writing my bachelor&amp;rsquo;s thesis too. Despite having a pretty sophisticated NeoVim LaTeX setup with snippets, typesetting took twice as long as I expected. For future papers, I&amp;rsquo;ll probably go with &lt;a href="https://typst.app/">Typst&lt;/a>.&lt;/li>
&lt;/ul></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/a-semester-paper-retrospective/</guid><pubDate>Sun, 18 May 2025 00:00:00 +0000</pubDate></item><item><title>Two results from probabilistic number theory</title><link>https://isabeldahlgren.github.io/two-results-from-probabilistic-number-theory/</link><description>&lt;p>I recently wrote a semester paper on probabilistic number theory. I&amp;rsquo;m very grateful to both of my supervisors, Dr. Vivian Kuperberg and Prof. Dr. Emmanuel Kowalski, for their insights and suggestions. Here&amp;rsquo;s the abstract:&lt;/p>
&lt;blockquote>
&lt;p>This report is an exposition of two central limit theorems in probabilistic number theory. We begin by introducing preliminary results from number theory and probability theory. Then we prove the Erdős-Kac theorem for the asymptotic behaviour of the prime divisor counting function. The majority of the report is devoted to Radziwiłł and Soundararajan&amp;rsquo;s recent proof of the Selberg central limit theorem for $\log |\zeta(\frac{1}{2} + it)|$.&lt;/p>
&lt;/blockquote>
&lt;p>You&amp;rsquo;re most welcome to have a look at the actual &lt;a href="https://drive.google.com/file/d/1AVEBwfBkdbcflnOQLYACcBXemqSPWzXP/view?usp=sharing">semester paper&lt;/a>. Here I&amp;rsquo;d like to give a more informal discussion of its contents.&lt;/p>
&lt;h3 id="deterministic-number-theory">&amp;ldquo;Deterministic&amp;rdquo; number theory &lt;a href="#deterministic-number-theory" class="hash">#&lt;/a>&lt;/h3>
&lt;p>A central theme in analytic number theory is deriving estimates for averages over arithmetic functions, e.g. something like&lt;/p>
$$\sum_{n \le x} f(n) = \text{main term} + \text{error term},$$&lt;p>where $f: \mathbb{N} \to \mathbb{C}$ is some arithmetic function. Many &amp;ldquo;named theorems&amp;rdquo; are estimates of the above form. For example:&lt;/p>
&lt;ul>
&lt;li>If we let $f(n) = n^{-1} 1_P(n)$, where $1_P$ is the indicator of the set of prime numbers, then we get &lt;a href="https://terrytao.wordpress.com/2013/12/11/mertens-theorems/">Mertens&amp;rsquo; second theorem&lt;/a>.&lt;/li>
&lt;li>If instead $f(n) = d(n)$, the function counting the number of divisors of a positive integer $n$, then a &lt;a href="https://mathworld.wolfram.com/DirichletDivisorProblem.html">theorem of Dirichlet&lt;/a> asserts that&lt;/li>
&lt;/ul>
$$\sum_{n \le x} f(n) = x \log x + (2 \gamma - 1)x + O(\sqrt{x}).$$&lt;ul>
&lt;li>Actually, the &lt;a href="https://en.wikipedia.org/wiki/Prime_number_theorem#Proof_sketch">Prime number theorem&lt;/a> is equivalent to an estimate of the above form. Just take $f$ to be the von Mangoldt function $\Lambda$, defined by&lt;/li>
&lt;/ul>
$$\Lambda(n) := \begin{cases}\log p, &amp; n = p^k \ \text{for some prime } p, \\\\ 0, &amp; \text{otherwise}.\end{cases}$$&lt;h3 id="probabilistic-number-theory">Probabilistic number theory &lt;a href="#probabilistic-number-theory" class="hash">#&lt;/a>&lt;/h3>
&lt;p>In my semester paper, I focused on two central limit theorems from probabilistic number theory: the Erdős–Kac and Selberg central limit theorems. Here, the approach is slightly different.&lt;/p>
&lt;h4 id="sums-and-integrals-as-expectations">Sums and integrals as expectations &lt;a href="#sums-and-integrals-as-expectations" class="hash">#&lt;/a>&lt;/h4>
&lt;p>First, we allow $f: \Omega \to \mathbb{C}$ to be a more general arithmetically defined quantity; typically $\Omega = \mathbb{N}$ or $\mathbb{R}$. So we could take $f$ to be an arithmetic function, but now we also allow for functions of arithmetic functions. For example, if we take $g: \mathbb{N} \to \mathbb{C}$ to be an arithmetic function and post-compose with $x \mapsto x^2 / \sqrt{\log \log x}$, then we could have $f: \mathbb{N} \to \mathbb{C}$ be a weird-looking expression defined by&lt;/p>
$$f(n) := \frac{g(n)^2}{\sqrt{\log \log n}}.$$&lt;p>To highlight the connection with probability theory, fix a positive integer $x$ and let $U_x$ be a random variable uniformly distributed on $\\{1, ..., x\\}$. Then we obtain&lt;/p>
$$\frac{1}{x} \sum_{n \le x} f(n) = \mathbb{E}(f(U_x)).$$&lt;p>So, estimating $x^{-1} \sum_{n \le x} f(n)$ is the same as estimating an expectation.&lt;/p>
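&lt;p>As a quick numerical sketch of this identity (my own illustration, not from the paper; the choice $f = d$, the divisor function from earlier, and all cutoffs are arbitrary), one can compare the exact average with a Monte Carlo estimate of $\mathbb{E}(f(U_x))$:&lt;/p>

```python
import random

# Divisor-count sieve: d[m] = number of divisors of m, for 1 <= m <= x.
# (x and the sample size are arbitrary illustration parameters.)
x = 5000
d = [0] * (x + 1)
for k in range(1, x + 1):
    for multiple in range(k, x + 1, k):
        d[multiple] += 1

# Left-hand side: the exact average (1/x) * sum_{n <= x} d(n).
exact_average = sum(d[1:]) / x

# Right-hand side: E[d(U_x)], estimated by sampling U_x uniformly on {1, ..., x}.
random.seed(0)
samples = [d[random.randint(1, x)] for _ in range(50_000)]
monte_carlo = sum(samples) / len(samples)

print(exact_average, monte_carlo)  # the two values should nearly agree
```

&lt;p>By Dirichlet&amp;rsquo;s theorem above, the exact average is $\log x + 2\gamma - 1 + O(x^{-1/2})$, so both numbers should be close to $\log 5000 + 2\gamma - 1 \approx 8.7$.&lt;/p>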
&lt;p>Similarly, the problem of estimating the integral&lt;/p>
$$\frac{1}{x} \int_0^x f(y) \ dy$$&lt;p>also comes down to estimating an expectation. In the continuous case, let $x$ be an arbitrary positive real number and take $U_x$ to be a random variable uniformly distributed over $[0, x]$. Then the above integral is precisely $\mathbb{E}(f(U_x))$.&lt;/p>
&lt;h4 id="natural-questions">Natural questions &lt;a href="#natural-questions" class="hash">#&lt;/a>&lt;/h4>
&lt;p>This naturally prompts the following questions:&lt;/p>
&lt;ul>
&lt;li>Could we also estimate the variance of $f(U_x)$?&lt;/li>
&lt;li>Could we even say something about the asymptotic distribution of $f(U_x)$ as $x \to \infty$?&lt;/li>
&lt;/ul>
&lt;p>Ideally, we can prove something like this:&lt;/p>
&lt;p>&amp;ldquo;Let $U_x: \Omega \to \Omega_x$ denote a random variable uniformly distributed on $\Omega_x := \\{1, ..., x\\}$ (or $[0, x]$). Let $Y$ be a random variable with some simple distribution. Then as $x \to \infty$, we have&lt;/p>
$$f(U_x) \xrightharpoonup{} Y,$$&lt;p>where $\xrightharpoonup{}$ denotes convergence in distribution.&amp;rdquo;&lt;/p>
&lt;p>A remarkable fact is that some rather convoluted expressions involving arithmetic functions can converge in distribution to standard normal random variables as $x \to \infty$.&lt;/p>
&lt;h4 id="general-proof-strategy">General proof strategy &lt;a href="#general-proof-strategy" class="hash">#&lt;/a>&lt;/h4>
&lt;p>Proving results of this form involves machinery from both number theory and probability. We typically use number theory to massage the expression $f(U_x)$, so it can be approximated by some simpler non-arithmetic random variable. This done, we can use limit theorems from probability to finish off the proof.&lt;/p>
&lt;h3 id="the-erdős-kac-theorem">The Erdős-Kac theorem &lt;a href="#the-erd%c5%91s-kac-theorem" class="hash">#&lt;/a>&lt;/h3>
&lt;p>This serves as a good &amp;ldquo;toy example&amp;rdquo; for the proof strategy outlined above. Also, it&amp;rsquo;s not too difficult to derive a heuristic proof; for details, see page 14.&lt;/p>
&lt;p>First, some notation. Let $\omega$ denote the function counting the number of distinct prime divisors of a positive integer. For example, $\omega(6) = 2$ since $6 = 2 \cdot 3$, while $\omega(9) = 1$, since $9 = 3^2$. Then we have:&lt;/p>
&lt;p>&lt;strong>Theorem.&lt;/strong> (Erdős-Kac, 1940). Let $U_n$ be a random variable uniformly distributed on $\{1, ..., n\}$ and define&lt;/p>
$$W_n := \frac{\omega(U_n) - \log \log n}{\sqrt{\log \log n}}.$$&lt;p>Then the sequence $(W_n)_{n \ge 1}$ of random variables converges in distribution to a standard normal random variable as $n \to \infty$.&lt;/p>
&lt;h3 id="selbergs-central-limit-theorem-for-log-zeta12--it">Selberg&amp;rsquo;s central limit theorem for $\log |\zeta(1/2 + it)|$ &lt;a href="#selbergs-central-limit-theorem-for-log-zeta12--it" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Let $\zeta$ denote the Riemann zeta function, defined by&lt;/p>
$$\zeta(s) := \sum_{n \ge 1} \frac{1}{n^s},$$&lt;p>valid for $\Re(s) > 1$ and extended to the rest of the complex plane by analytic continuation. The Riemann hypothesis, perhaps the most famous open problem in mathematics, says that the only zeros of the zeta function are of the form $s = -2k$ for positive integers $k$ or lie on the critical line $\\{1/2 + it : t \in \mathbb{R}\\}$.&lt;/p>
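&lt;p>As a small sanity check of the series definition (my own illustration; the series only converges for $\Re(s) > 1$), one can compare a truncated sum at $s = 2$ with Euler&amp;rsquo;s classical evaluation $\zeta(2) = \pi^2/6$:&lt;/p>

```python
import math

def zeta_partial(s, N):
    # Truncated Dirichlet series: sum of n^(-s) for n = 1, ..., N.
    return sum(1.0 / n ** s for n in range(1, N + 1))

approx = zeta_partial(2, 10**6)
exact = math.pi ** 2 / 6  # Euler's solution to the Basel problem

print(approx, exact)  # the neglected tail contributes roughly 1/N
```

&lt;p>No such truncation trick works on the critical line itself, of course; there the series diverges and one genuinely needs the analytic continuation.&lt;/p>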
&lt;p>Now fix a real number $T > 0$, and consider the random variable&lt;/p>
$$L_T := \log |\zeta(1/2 + iU_T)|,$$&lt;p>where $U_T$ is a random variable uniformly distributed on the interval $[-T, T]$. Considering just how hard the Riemann hypothesis is, it&amp;rsquo;s pretty surprising that the behaviour of $L_T$ is relatively well-understood. In fact, we have:&lt;/p>
&lt;p>&lt;strong>Theorem.&lt;/strong> (Selberg, 1946). Let $L_T$ be the random variable defined as above, and set&lt;/p>
$$M_T := \frac{L_T}{\sqrt{\frac{1}{2} \log \log T}}.$$&lt;p>Then the family $(M_T)_{T > 0}$ of random variables converges in distribution to a standard normal random variable as $T \to \infty$.&lt;/p>
&lt;p>In my semester paper, I cover the 2016 proof due to Radziwiłł and Soundararajan. Just as in the Erdős-Kac theorem, one can get an intuitive sense of why this result is plausible.&lt;/p>
&lt;h3 id="final-words">Final words &lt;a href="#final-words" class="hash">#&lt;/a>&lt;/h3>
&lt;p>For details, I encourage you to have a look at the actual paper. I hope you enjoy reading it as much as I enjoyed writing it.&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/two-results-from-probabilistic-number-theory/</guid><pubDate>Sun, 11 May 2025 00:00:00 +0000</pubDate></item><item><title>On decision fatigue</title><link>https://isabeldahlgren.github.io/on-decision-fatigue/</link><description>&lt;h3 id="i">I. &lt;a href="#i" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Life involves plenty of hard decisions. Should I switch jobs? Should I marry him? Should I move abroad? Then there are the &amp;ldquo;easy&amp;rdquo; decisions: which pyjamas to wear or which brand of laundry detergent to buy. Although it doesn&amp;rsquo;t matter which option you choose here, choosing is still hard. In fact, if you think of buying laundry detergent as an optimisation problem, taking into account things like social impact, price and quality, I&amp;rsquo;d expect the optimisation problem to be NP-hard! Here&amp;rsquo;s &lt;a href="https://thezvi.wordpress.com/2017/07/22/choices-are-bad/">Zvi&lt;/a>:&lt;/p>
&lt;blockquote>
&lt;p>When you have a choice, you must stop what you’re doing, and choose.
[&amp;hellip;]
Making a random slash arbitrary choice might not be hard (although sometimes it is) but first you have to choose to choose at random. If you choose not to decide, you still have made a choice. There’s no escape!&lt;/p>
&lt;/blockquote>
&lt;p>This leads to one of the big curses of adulthood &amp;ndash; decision fatigue.&lt;/p>
&lt;h3 id="ii">II. &lt;a href="#ii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Most of us are aware that choosing is draining. Steve Jobs wore black turtlenecks to reduce the number of decisions in a given day. &lt;em>Should I Stay or Should I Go&lt;/em> is basically a three-minute rant about how terrible it is to make decisions&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. The reason going to IKEA can be so tiring is that it involves making many small choices. Getting to choose among many options is only fun up to a certain point.&lt;/p>
&lt;p>According to &lt;a href="https://en.wikipedia.org/wiki/Decision_fatigue">Wikipedia&lt;/a>, decision fatigue is &amp;ldquo;the deteriorating quality of decisions made by an individual after a long session of decision making&amp;rdquo;. Decision fatigue arises from a broader phenomenon called ego depletion. Intuitively, this is the idea that willpower works like a battery. If we exert a lot of mental effort at one task, then we cannot exert as much mental effort at subsequent tasks without recharging. Making decisions drains our willpower battery.&lt;/p>
&lt;p>The phenomena of decision fatigue and ego depletion have been demonstrated in a number of amusing &lt;a href="https://www.nytimes.com/2011/08/21/magazine/do-you-suffer-from-decision-fatigue.html">studies&lt;/a>. A striking example is the study on Israeli judges, demonstrating the so-called &lt;a href="https://en.wikipedia.org/wiki/Hungry_judge_effect">hungry judge effect&lt;/a>.&lt;/p>
&lt;p>Computer science offers another way to think about decision fatigue. While a psychologist might speak of ego depletion, a computer scientist might speak of computational complexity. Here&amp;rsquo;s from &lt;em>Algorithms to Live By&lt;/em> by Brian Christian and Tom Griffiths:&lt;/p>
&lt;blockquote>
&lt;p>&amp;ldquo;One of the implicit principles of computer science, as odd as it may
sound, is that computation is bad: the underlying directive of any good
algorithm is to minimize the labor of thought. When we interact with other
people, we present them with computational problems—not just explicit
requests and demands, but implicit challenges such as interpreting our
intentions, our beliefs, and our preferences. It stands to reason, therefore,
that a computational understanding of such problems casts light on the
nature of human interaction.&amp;rdquo;&lt;/p>
&lt;/blockquote>
&lt;p>Further, the authors go on to advocate for what they call &amp;ldquo;computational kindness&amp;rdquo;. When interacting with people, we should try sharing the computational cost of making decisions. If you prefer a particular restaurant, don&amp;rsquo;t hesitate to indicate your preference. Although you&amp;rsquo;re trying to be polite by saying you&amp;rsquo;re indifferent, the other person might be relieved to hear you&amp;rsquo;re craving a particular kind of food.&lt;/p>
&lt;h3 id="iii">III. &lt;a href="#iii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>The first step in dealing with decision fatigue is to notice it (as with so much else related to mental well-being, I guess). Then one is in a better position either to make a quick, haphazard decision or to recharge one&amp;rsquo;s willpower battery, so one can make a better decision later. This is easier said than done, though.&lt;/p>
&lt;p>For example, I recently planned a trip to London, booking flights, finding accommodation, coordinating with others, etc. Because I was excited about the trip, I didn&amp;rsquo;t notice when decision fatigue kicked in. If I&amp;rsquo;d known that it was decision fatigue, I wouldn&amp;rsquo;t have planned the trip in as much detail.&lt;/p>
&lt;p>In practice, noticing decision fatigue is hard. One can always practise &lt;a href="https://www.lesswrong.com/posts/GLPaZamxqkx7XJbXv/the-skill-of-noticing-emotions">the skill of noticing&lt;/a>, although it takes a lot of time. Another strategy would be trying to identify situations involving many choices and recalling how they felt. For instance, typical triggers of decision fatigue might be shopping, planning or coordinating with others. Analysis paralysis or a feeling of frustration might be good proxies for decision fatigue.&lt;/p>
&lt;p>But one can also take preventative measures. We can cut out a surprising number of choices from our everyday lives by coming up with default actions for various situations. For example, I don&amp;rsquo;t want to only wear black turtlenecks, but I do have a default outfit &amp;ndash; a uniform of sorts. In Google Calendar, I also have a calendar describing my ideal week, so I don&amp;rsquo;t have to decide which lectures to attend on a day-to-day basis. Or at the supermarket: pick the cheapest toothpaste.&lt;/p>
&lt;p>Insignificant non-recurring choices can be batched, so one doesn&amp;rsquo;t go into choosing mode too often. One can even gamify the experience of making choices. For instance, I could have challenged myself to plan the trip within a given time frame.&lt;/p>
&lt;h3 id="iv">IV. &lt;a href="#iv" class="hash">#&lt;/a>&lt;/h3>
&lt;p>One of the best things about being an adult is that you get to do whatever you want. Getting to choose is a privilege, after all. It&amp;rsquo;s a privilege we should use.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>If you speak Swedish, I recommend listening to Amanda Ginsburg&amp;rsquo;s &lt;a href="https://open.spotify.com/track/2ESWovaZgVi3BMloGgukKw?si=ca868beba3e24129">I de många valens land&lt;/a>.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/on-decision-fatigue/</guid><pubDate>Sun, 04 May 2025 00:00:00 +0000</pubDate></item><item><title>On good conversations</title><link>https://isabeldahlgren.github.io/on-good-conversations/</link><description>&lt;h3 id="i">I. &lt;a href="#i" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Some studies suggest we spend 80-90% of our waking hours talking with others. For a normal person, that&amp;rsquo;s at least 12 hours (!) per day. We spend most of our working hours talking to others, whether it be in meetings, lectures or exercise classes. When we&amp;rsquo;re not working, friends, roommates or partners might be around. The hours add up.&lt;/p>
&lt;p>Why do we spend this much time talking with others? Above all, interesting conversations are among the greatest pleasures in life. There&amp;rsquo;s nothing like a good conversation over a good meal. But talking with others can also be very productive. For example, talking with people smarter than you is a phenomenal way of learning. It&amp;rsquo;s like you&amp;rsquo;re downloading part of someone else&amp;rsquo;s worldview into your own head. Of course, there are other ways of achieving the same thing. However, in my opinion, one-on-one discussions come the closest to a loss-free download.&lt;/p>
&lt;p>For this reason, it&amp;rsquo;s natural to ask how we can have better conversations. Similar questions are often raised in the context of dating (&amp;ldquo;What are good topics to talk about on a date?&amp;rdquo;) or business (&amp;ldquo;How can we have more productive meetings?&amp;rdquo;). However, this question seems relevant to everyone.&lt;/p>
&lt;h3 id="ii">II. &lt;a href="#ii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Several classic Russian novels go something like this: some characters are sitting on a train, they begin chatting and one thing leads to the next. That leads to an 800-page novel. The conversations they have are fascinating, although a bit exaggerated.&lt;/p>
&lt;p>I often have good conversations when I&amp;rsquo;m traveling. It&amp;rsquo;s as if my travel companions and I have this tacit agreement: &amp;ldquo;Okay, we&amp;rsquo;re all somewhat bored. We can&amp;rsquo;t do small talk for another five hours, so let&amp;rsquo;s talk. For real.&amp;rdquo; For this reason, people often end up being more open with one another.&lt;/p>
&lt;p>Moreover, simply being in a different environment can help. If you and a colleague are on a business trip, you&amp;rsquo;re less likely to ask the usual questions about the weather or life back home.&lt;/p>
&lt;p>Takeaway: the norms of the conversation matter a lot. If everyone implicitly agrees on the purpose of the discussion, things become a lot easier. Sometimes, there&amp;rsquo;s no need to say it out loud. But it can definitely help. For example, it might be better to start the phone call by saying &amp;ldquo;Hi, I need advice about you-know-who.&amp;rdquo; Or you might agree on a set of norms for the conversation. For instance, I recently learned that some people in rationalist circles will set a five-minute timer whenever someone brings up philosophy of mind, a topic which often leads nowhere.&lt;/p>
&lt;h3 id="iii">III. &lt;a href="#iii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Many scientists, artists and writers have had something like an intellectual spouse &amp;ndash; a close collaborator with whom they&amp;rsquo;d discuss ideas. Famous examples include Watson and Crick, Lennon and McCartney or Sartre and de Beauvoir.&lt;/p>
&lt;p>Discussing is a skill. Doing it with the same people, at the same time and at the same place is a bit like practising regularly. Once you know each other sufficiently well, there&amp;rsquo;s no need to agree on conventions again. You just pick up from where you left off.&lt;/p>
&lt;h3 id="iv">IV. &lt;a href="#iv" class="hash">#&lt;/a>&lt;/h3>
&lt;p>The Arabic proverb goes &amp;ldquo;Speech is silver, silence is golden.&amp;rdquo; But in order for there to be silence, you need to be in a space with little background noise. Here comes an obvious point: having good discussions in noisy environments is much harder. I feel like this point tends to get neglected in practice, so I find it worth emphasising. Most restaurants, cafés or co-working spaces can be quite rowdy. Instead, consider going for a walk. Apparently Steve Jobs loved going for walking meetings, perhaps for this reason.&lt;/p>
&lt;h3 id="v">V. &lt;a href="#v" class="hash">#&lt;/a>&lt;/h3>
&lt;p>From an information-theoretic perspective, good conversations are nothing short of miracles. You&amp;rsquo;ll have several people communicating complex ideas at an astonishing speed. There are no mind-reading devices involved, just words. In my own experience, having a good conversation largely comes down to synchronisation.&lt;/p>
&lt;p>As a quick side note, there&amp;rsquo;s a reason I&amp;rsquo;m using the word &amp;ldquo;synchronisation&amp;rdquo; rather than &amp;ldquo;communication&amp;rdquo;. &amp;ldquo;Communication&amp;rdquo; can also be interpreted as a one-sided process, where one party shares information with another group. But in a good discussion, information usually flows both ways.&lt;/p>
&lt;p>One way of synchronising better is asking more questions. Not only do questions help eliminate confusion, but they also make the conversation much more enjoyable. Depending on the situation, asking questions might also be necessary for everyone to feel included. Moreover, as pointed out by Ben Kuhn &lt;a href="https://www.benkuhn.net/listen/">here&lt;/a>, asking questions puts you in a better position to help whoever you&amp;rsquo;re talking to. Without any background information, you&amp;rsquo;re likely to give bad advice. And you can only acquire background information by asking questions.&lt;/p>
&lt;p>Another useful technique is paraphrasing. The goal here isn&amp;rsquo;t to sound smart, but to synchronise with your conversation partner. In practice, this means I&amp;rsquo;ll try rephrasing the other person&amp;rsquo;s idea and asking if it&amp;rsquo;s accurate.&lt;/p>
&lt;h3 id="vi">VI. &lt;a href="#vi" class="hash">#&lt;/a>&lt;/h3>
&lt;p>It&amp;rsquo;s not just about being in sync, though. A good conversation also needs to be heading in an interesting direction. If you all know you&amp;rsquo;re heading towards a dead end or that you&amp;rsquo;re going in circles, then it might be worth pointing this out (kindly, of course). Chances are people will agree and be glad someone noticed. It&amp;rsquo;s a bit like when the host declares the party is over at 2 am, and you&amp;rsquo;re grateful for someone forcing you to go home and get some sleep. Another benefit of moderating the discussion is that you simultaneously encourage the group to &lt;a href="https://en.wikipedia.org/wiki/Metacognition">metacogitate&lt;/a>. Similarly, if you feel uncomfortable with a particular subject, it might be a good idea to say you&amp;rsquo;d prefer to talk about something else, rather than being passive-aggressive.&lt;/p>
&lt;h3 id="vii">VII. &lt;a href="#vii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>How many of our 12 hours of conversation are of high quality? Quite few, I suspect. And this is completely OK. Having good conversations can be draining too. It&amp;rsquo;s a form of flow. Even Mihaly Csikszentmihalyi, the psychologist who coined the term &amp;ldquo;flow&amp;rdquo;, remarked that we cannot spend all our lives flowing. So there&amp;rsquo;s no need to cut all small talk.&lt;/p>
&lt;p>But some conversations matter more than others. In fact, certain conversations can be life-changing. They make life more enjoyable, generate creative and scientific insights and connect us to others. What if we could improve the quality of these conversations ever so slightly?&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/on-good-conversations/</guid><pubDate>Sun, 27 Apr 2025 00:00:00 +0000</pubDate></item><item><title>Building time machines</title><link>https://isabeldahlgren.github.io/building-time-machines/</link><description>&lt;p>There&amp;rsquo;s a folder on my computer called &amp;ldquo;Time machine&amp;rdquo;, where I collect files on today&amp;rsquo;s me. It&amp;rsquo;s not exactly an FBI record on myself, but rather like a digital self portrait. After doing this over a longer period of time, I&amp;rsquo;ll share the folder with future versions of people I care about.&lt;/p>
&lt;h3 id="i">I. &lt;a href="#i" class="hash">#&lt;/a>&lt;/h3>
&lt;p>I like poring over old photos. I think it&amp;rsquo;s something I&amp;rsquo;ve picked up from my mom. She&amp;rsquo;s something of a camera terrorist &amp;ndash; she&amp;rsquo;ll always pull up the camera and have everyone else pose. As a teenager, I used to find this pretty embarrassing. But now I&amp;rsquo;m glad she ignored my complaints.&lt;/p>
&lt;p>A key theme in the book &lt;em>Thinking, Fast and Slow&lt;/em> is the distinction between the experiencing and remembering self. The experiencing self is the part feeling pleasure and pain, while the remembering self reflects on past experiences. While we&amp;rsquo;re often told to carpe diem more, we&amp;rsquo;re rarely told to try pleasing our remembering selves. But looking back on memories can be a huge source of joy.&lt;/p>
&lt;p>In &lt;em>A little book about happiness&lt;/em>, Micael Dahlén, professor of well-being at the Stockholm School of Economics, describes a fascinating experiment in which participants were asked to estimate their well-being at three points in time: past, present and future. On average, the graph looked like a slanted &amp;ldquo;V&amp;rdquo;, the lowest point being the present. Perhaps looking back can make you more appreciative of what you have.&lt;/p>
&lt;p>If that all sounds a bit woo woo, that&amp;rsquo;s OK. If you want, you can also use your time machine to spot cognitive biases. For instance, I have a list of things I&amp;rsquo;ve changed my mind about. As highlighted in &lt;em>Stumbling on Happiness&lt;/em> (bad title, good book), we consistently fail to appreciate just how much we change. I also have a document with my worries. When revisiting past entries, I notice a tendency to take things too seriously.&lt;/p>
&lt;p>The end product, your time machine, is something you can share with others. In due time, I&amp;rsquo;ll share mine with the people who matter the most to me. For instance, I&amp;rsquo;d like to share my time machine with family members, long-time friends and my future partner. And in the very distant future, I might show it to my teenage kids, allowing them to get to know their mother better.&lt;/p>
&lt;h3 id="ii">II. &lt;a href="#ii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>There&amp;rsquo;s a lot of latitude in how to build a time machine. My journal makes up a big portion of my time machine. But there are also other files in there. One of the benefits of the digital age is that we can store huge amounts of data very easily, so you may as well be experimental.&lt;/p>
&lt;p>In secondary school, a friend of mine started a trend of creating monthly playlists on Spotify&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. I was quick to hop on the trend, and I&amp;rsquo;m very grateful I did. Today I have a complete record of how my music taste has evolved since August 2015&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. But I&amp;rsquo;m far from the only one doing this kind of thing. For example, Luke Muehlhauser publishes a list of music recommendations on a quarterly basis &lt;a href="https://lukemuehlhauser.com/">on his blog&lt;/a>.&lt;/p>
&lt;p>I&amp;rsquo;ve also kept a reading diary since 2016. It must have started out as a vanity project. Having read a boring classic, I assume I wanted to remember that I&amp;rsquo;d finished it. It&amp;rsquo;s amusing seeing how my preferences have changed over time. My reading log is just a long markdown document on my computer&lt;sup id="fnref:3">&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref">3&lt;/a>&lt;/sup>, but you could also use a platform like &lt;a href="https://thestorygraph.com/">StoryGraph&lt;/a>. For the past couple of months, I&amp;rsquo;ve also been saving blog posts that particularly resonated with me. However, you could go beyond music and reading lists. Luke Muehlhauser also compiles lists of video games. If you&amp;rsquo;re a big podcast listener, take note of which podcasts you particularly enjoyed in a given month.&lt;/p>
&lt;p>This is all a lot of work, especially if you consume a lot of content. Be selective, but be consistent. By tracking your media diet over the course of years, you&amp;rsquo;ll learn how you&amp;rsquo;ve changed and gain more self-knowledge. You are what you eat, right?&lt;/p>
&lt;p>Other things could go into your time machine too. You can get to know someone else in a surprisingly short amount of time by asking the right questions. In fact, there are various compilations of questions meant to foster closer relationships, the most famous one being the &lt;a href="https://36questionsinlove.com/">36 questions to fall in love&lt;/a>. You could grab your favourite questions from that list and answer them in writing every year, say.&lt;/p>
&lt;p>I have two or three documents of this genre, but I&amp;rsquo;m planning on adding more. As mentioned above, there&amp;rsquo;s the list of things I was wrong about, as well as the list of things preoccupying me. Following the example of &lt;a href="https://nabeelqu.co/principles">Nabeel Qureshi&lt;/a>, I also have a &amp;ldquo;principles&amp;rdquo; document.&lt;/p>
&lt;h3 id="iii">III. &lt;a href="#iii" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Humans have always been building their own time machines in the form of literature and art. But a time machine can be just about anything. Building one doesn&amp;rsquo;t require artistic talent, just consistency. Nowadays, you can build a digital time machine with minimal effort. Since files don&amp;rsquo;t take up any physical space, there&amp;rsquo;s no need to clutter your house with keepsakes.&lt;/p>
&lt;p>I was recently given a necklace which belonged to my great grandmother. It&amp;rsquo;s a beautiful piece of jewellery. But it doesn&amp;rsquo;t tell me who Ellen was. I hope I&amp;rsquo;ll be able to gift future generations something more.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>Thank you Nina Ivarsson.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>The playlists from 2015-2017 are on my parents&amp;rsquo; account, but the more recent playlists can be found &lt;a href="https://open.spotify.com/user/q2r0ontmw48z1mc7u2elyho2u?si=1aed8800d74d41ce">here&lt;/a>.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:3">
&lt;p>For excerpts of my reading log, see the &lt;a href="https://isabeldahlgren.github.io/library/">library&lt;/a>.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/building-time-machines/</guid><pubDate>Sun, 20 Apr 2025 00:00:00 +0000</pubDate></item><item><title>How I use LLMs</title><link>https://isabeldahlgren.github.io/how-i-use-llms/</link><description>&lt;p>&lt;em>Related: &lt;a href="https://isabeldahlgren.github.io/will-ai-replace-mathematicians/">Will AI replace mathematicians?&lt;/a>&lt;/em>&lt;/p>
&lt;p>About 40% of students in the ETH main library always have a ChatGPT tab open. I soon decided to try using LLMs for my own studies (because the wisdom of the crowd is a real thing). I haven&amp;rsquo;t figured out how to best use LLMs for my coursework, but I&amp;rsquo;m experimenting with various approaches.&lt;/p>
&lt;h3 id="getting-unstuck">Getting unstuck &lt;a href="#getting-unstuck" class="hash">#&lt;/a>&lt;/h3>
&lt;p>For me, a big time sink is getting stuck on details. I usually go over the lecture notes after lectures, trying to work out the steps I didn&amp;rsquo;t follow with pen and paper. Ideally, I&amp;rsquo;d do this sitting next to a friend - it&amp;rsquo;s very convenient having someone nearby to ask. As Nate Soares put it &lt;a href="https://www.lesswrong.com/posts/w5F4w8tNZc6LcBKRP/on-learning-difficult-things">here&lt;/a>:&lt;/p>
&lt;blockquote>
&lt;p>&amp;ldquo;The problem is, most of the time that I get stuck, I get stuck on something incredibly stupid. I&amp;rsquo;ve either misread something somewhere or misremembered a concept from earlier in the book. Usually, someone looking over my shoulder could correct me in ten seconds with three words. &amp;lsquo;Dude. Disjunction. &lt;em>Dis&lt;/em>junction.&amp;rsquo;&amp;rdquo;&lt;/p>
&lt;/blockquote>
&lt;p>But studying with a friend isn&amp;rsquo;t always possible. If there&amp;rsquo;s a point of confusion I cannot resolve myself after making a reasonable effort, asking an LLM might help. Formulating a good question is always an instructive exercise. Moreover, nine times out of ten, the response is useful. Even if the LLM doesn&amp;rsquo;t entirely solve my problem, it might reference relevant concepts or serve as a sanity check. Sometimes I&amp;rsquo;ll learn that my approach was completely mistaken - and that&amp;rsquo;s certainly useful too!&lt;/p>
&lt;h3 id="hints">Hints &lt;a href="#hints" class="hash">#&lt;/a>&lt;/h3>
&lt;p>In the classic &lt;em>How to Solve It&lt;/em>, George Pólya famously noted that mathematics isn&amp;rsquo;t a spectator sport. By generating hints, LLMs can be an aid in the problem-solving process too. Just have an honest attempt at the problem before consulting an LLM, and tell the LLM not to give away the entire solution. But it&amp;rsquo;s important to notice when one is stuck and ask for help. For someone like me who usually waits too long before taking hints, the ease of generating hints with ChatGPT makes a huge difference.&lt;/p>
&lt;h3 id="the-big-picture">The big picture &lt;a href="#the-big-picture" class="hash">#&lt;/a>&lt;/h3>
&lt;p>LLMs are terrific at explaining high-level ideas. I&amp;rsquo;m a big fan of learning concepts &amp;ldquo;top-down&amp;rdquo;, starting with the big picture before getting into the details. While having more context doesn&amp;rsquo;t necessarily mean the material sticks better, I find this approach much more enjoyable. I usually ask ChatGPT to give me the key idea before I look into the details. Apart from this, I regularly prompt ChatGPT to give me the intuition for something or to motivate concepts. If a lecturer is pressed for time, they&amp;rsquo;ll cut the motivation bit, rather than leaving out a definition or theorem statement. For this reason, an AI-generated introduction can complement the lectured material.&lt;/p>
&lt;p>Some of my favourite prompts include:&lt;/p>
&lt;ul>
&lt;li>&amp;ldquo;Why do we care about X?&amp;rdquo;&lt;/li>
&lt;li>&amp;ldquo;What is the main idea behind the proof of X?&amp;rdquo;&lt;/li>
&lt;li>&amp;ldquo;What&amp;rsquo;s the intuition for this definition?&amp;rdquo;&lt;/li>
&lt;/ul>
&lt;p>I also find it helpful trying to explain a concept in my own words and asking ChatGPT to elaborate or check if my explanation is accurate.&lt;/p>
&lt;h3 id="caveats">Caveats &lt;a href="#caveats" class="hash">#&lt;/a>&lt;/h3>
&lt;p>All this said, I&amp;rsquo;d like to add a few caveats.&lt;/p>
&lt;p>A friend or teaching assistant could help with the above tasks better than today&amp;rsquo;s LLMs. They know about your mathematical background and what conventions you&amp;rsquo;re using. When I interact with chatbots, explaining conventions and providing context adds a lot of overhead. However, this problem seems fixable. Many AI labs are already working on ways to provide more personalised responses by having the chatbot remember information across chat sessions. Students could e.g. upload lecture notes and indicate which parts they&amp;rsquo;d covered.&lt;/p>
&lt;p>AI systems also make mistakes. But this isn&amp;rsquo;t that big of an issue. Most mistakes are easy to spot, especially if you ask the AI to explain steps that seem fishy. If you point out what went wrong, it will modify the argument. With human guidance, AI systems can get quite far. Also, LLMs don&amp;rsquo;t need to get all the details right in order to be useful. As Terry Tao noted in &lt;a href="https://unlocked.microsoft.com/ai-anthology/terence-tao/">this blog post&lt;/a>:&lt;/p>
&lt;blockquote>
&lt;p>&amp;ldquo;Strangely, even nonsensical LLM-generated math often references relevant concepts. With effort, human experts can modify ideas that do not work as presented into a correct and original argument. The 2023-level AI can already generate suggestive hints and promising leads to a working mathematician and participate actively in the decision-making process.&amp;rdquo;&lt;/p>
&lt;/blockquote>
&lt;p>Of course, if you want to be dead certain the AI-generated argument is correct, have the AI output a formal proof in Lean.&lt;/p>
&lt;p>Another fear of mine, perhaps ungrounded, is that LLMs will make us lazy. Learning requires a certain amount of effort, while writing a good LLM prompt is relatively easy. If we use LLMs more and more, will we remove the friction needed for learning? I don&amp;rsquo;t know whether this fear is valid or if it&amp;rsquo;s just an instance of &amp;ldquo;tech panic&amp;rdquo;. But as long as we set boundaries for our LLM usage, we need not spoil the learning experience.&lt;/p>
&lt;h3 id="where-does-this-leave-us">Where does this leave us? &lt;a href="#where-does-this-leave-us" class="hash">#&lt;/a>&lt;/h3>
&lt;p>It seems, then, as if we could overcome the problems I&amp;rsquo;ve encountered when tinkering with AIs. My experience with using LLMs as part of my studies has been positive, so I&amp;rsquo;ll continue exploring ways in which AI can assist. The one thing that LLMs can&amp;rsquo;t provide, however, is the social aspect of doing maths. Solving problems with others is infinitely more fun than coming up with LLM prompts. If anything, I think the above use cases highlight the importance of doing maths together with others.&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/how-i-use-llms/</guid><pubDate>Sun, 13 Apr 2025 00:00:00 +0000</pubDate></item><item><title>Will AI replace mathematicians?</title><link>https://isabeldahlgren.github.io/will-ai-replace-mathematicians/</link><description>&lt;p>I used to think of maths as the one thing LLMs couldn&amp;rsquo;t do well. While GPT 3.0 would excel in language-based tasks, it would struggle to solve elementary maths problems. But a lot has happened since then. Over the last year, I&amp;rsquo;ve come to take the idea of using AI as an aid for doing maths more seriously. In fact, I now believe LLMs might prove new theorems with no human guidance within just 3 years.&lt;/p>
&lt;p>First, there was the silver medal at the International Mathematical Olympiad (IMO). Although AlphaProof and AlphaGeometry 2 took well over 9h, the time contestants are given, it&amp;rsquo;s quite a feat: IMO problems require an element of creativity. Not only that - AlphaProof was able to formalise its solution in Lean. Lean is still a fairly new programming language, and there isn&amp;rsquo;t nearly as much training data as for other programming languages. The work of the DeepMind team shows two things: firstly, lots of clever people are trying to build AI systems for doing maths; secondly, their current approach apparently works pretty well. However, as of April 2025, you don&amp;rsquo;t need an AI specifically trained to do maths in order to solve tricky problems: the new o3 and o4-mini models achieve &lt;a href="https://openai.com/index/introducing-o3-and-o4-mini">impressive performance&lt;/a> in the American Invitational Mathematics Examination (AIME). What if we use RL to build AI systems specialised in more advanced topics? Perhaps these AIs might prove new theorems. Even if they don&amp;rsquo;t, they might provide researchers with insights.&lt;/p>
&lt;p>Next, several top-gun mathematicians think AI might transform maths research in the next decade. Most notably, Terence Tao has highlighted ways in which machines can help human mathematicians. Here&amp;rsquo;s an excerpt from a &lt;a href="https://unlocked.microsoft.com/ai-anthology/terence-tao">blog post&lt;/a> of Tao&amp;rsquo;s:&lt;/p>
&lt;blockquote>
&lt;p>&amp;ldquo;I could feed GPT-4 the first few PDF pages of a recent math preprint and get it to generate a half-dozen intelligent questions that an expert attending a talk on the preprint could ask. I plan to use variants of such prompts to prepare my future presentations or to begin reading a technically complex paper.&amp;rdquo;&lt;/p>
&lt;/blockquote>
&lt;p>In &lt;a href="https://epoch.ai/frontiermath/expert-perspectives">this report&lt;/a> from Epoch AI, Richard Borcherds seems equally optimistic about the possibilities of using, predicting that AI might even surpass human mathematicians within 10 years. Overall, I think we&amp;rsquo;re starting to see a cultural shift in the maths community. People are recognising that AI is a huge deal.&lt;/p>
&lt;p>Finally, I observed that about 40% of students in the library seem to have a ChatGPT tab open at all times. These are students doing STEM subjects, such as maths and physics. This seems like an important data point (and this isn&amp;rsquo;t just because I&amp;rsquo;m giving more weight to first-hand experience). LLMs are transforming the way students learn, and these are the people who will go on to do research in a couple of years. Chances are we won&amp;rsquo;t stop using LLMs just because the material becomes more niche. Even if you receive a hallucinatory answer, the LLM might reference a relevant concept, helping you get unstuck. I&amp;rsquo;m using ChatGPT for my own studies, and I&amp;rsquo;m impressed by its explaining abilities. Basically, it can easily handle any concept you&amp;rsquo;ll come across in a master&amp;rsquo;s degree in mathematics. I&amp;rsquo;ve also prompted ChatGPT to distill the key ideas from more recent papers and found its responses very helpful.&lt;/p>
&lt;p>All in all, I&amp;rsquo;ve come to shorten my AI timelines quite a bit. But rather than thinking &amp;ldquo;Will I ever find a job?&amp;rdquo;, I find myself thinking &amp;ldquo;What a time to be alive!&amp;rdquo;&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/will-ai-replace-mathematicians/</guid><pubDate>Sun, 06 Apr 2025 00:00:00 +0000</pubDate></item><item><title>"Just switch it off"</title><link>https://isabeldahlgren.github.io/just-switch-it-off/</link><description>&lt;p>If we develop a rogue AI, couldn&amp;rsquo;t we just switch it off? This is the obvious objection to the idea that AI could be dangerous. Here &amp;ldquo;switching off&amp;rdquo; would mean deleting a model&amp;rsquo;s weights, so it can&amp;rsquo;t be deployed. Deleting files is easy enough, so what might prevent us from switching off a misaligned AI?&lt;/p>
&lt;p>First, the users need to realise that the model is dangerous. This can be challenging, especially for more advanced models. The key premise here is that AIs will try preserving themselves. That is, they don&amp;rsquo;t &amp;ldquo;want&amp;rdquo; to be turned off - this would prevent them from pursuing their goals. If an AI knows that it will be turned off if it&amp;rsquo;s misaligned, it might try appearing safe during training. This is commonly known as &lt;a href="https://arxiv.org/pdf/2412.14093">&amp;ldquo;alignment faking&amp;rdquo;&lt;/a>. Although this sounds a bit far-fetched, this phenomenon has been observed in some models.&lt;/p>
&lt;p>Second, and perhaps more worrying, there&amp;rsquo;s the question of whether we as a society want to &amp;ldquo;press delete&amp;rdquo;. Turning off sandboxed AI - AI developed in a secure training environment - isn&amp;rsquo;t a big deal. The negative consequences, if any, are limited to the few people who can access the model. But a leading AI company has strong incentives against withdrawing potentially unsafe models from the market. Doing this would mean less profit, bad PR and giving away market share to competitors. Besides these economic considerations, there&amp;rsquo;s the geopolitical aspect. As highlighted in the &lt;a href="https://ai-2027.com/">AI 2027 report&lt;/a>, the fear of falling behind in the AI arms race might lead us to deploy even misaligned AI. Even if the people behind the AI wanted to switch off their models because of safety considerations, what would the general public think? Most people would probably be reluctant to stop their favourite LLMs, despite poor performance on safety benchmarks.&lt;/p>
&lt;p>Switching off an AI isn&amp;rsquo;t just a matter of deleting files. It requires us to detect unsafe behaviour, a task that&amp;rsquo;s likely to become more difficult with more capable models. Then there&amp;rsquo;s the human factor. Asking that AI companies delete models showing signs of misalignment is asking for a lot. In the future, turning off an AI in a broader sense would require turning off parts of our society.&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/just-switch-it-off/</guid><pubDate>Sun, 30 Mar 2025 00:00:00 +0000</pubDate></item><item><title>From a YouTube alumna</title><link>https://isabeldahlgren.github.io/from-a-youtube-alumna/</link><description>&lt;p>Random people on the Internet have played a huge role in my education. I&amp;rsquo;m not just referring to my coursework at university, but also to &amp;ldquo;Bildung&amp;rdquo; more generally. I&amp;rsquo;ve learned a ton by browsing StackOverflow threads and reading Medium articles. However, I&amp;rsquo;ve probably learned the most from watching YouTube.&lt;/p>
&lt;h3 id="learning-by-watching">Learning by watching &lt;a href="#learning-by-watching" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Above all, there&amp;rsquo;s some really high-quality content out there. Nowadays there are full-time YouTubers, working on creating professional, meticulously edited videos. And some channels, such as &lt;a href="https://www.youtube.com/channel/UCsXVk37bltHxD1rDPwtNM8Q">Kurzgesagt&lt;/a>, are even run by entire teams of illustrators and script-writers. Moreover, because anyone can record themselves and upload it to YouTube, we have world-class experts sharing their knowledge in YouTube lectures&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>. This means there are some truly remarkable YouTube videos. For example, here are comments from some &lt;a href="https://www.youtube.com/@3blue1brown">3Blue1Brown&lt;/a> videos:&lt;/p>
&lt;blockquote>
&lt;p>&amp;ldquo;I dropped out in 10th grade 25 years ago and your videos have inspired me to go back to school.&amp;rdquo;&lt;/p>
&lt;/blockquote>
&lt;blockquote>
&lt;p>&amp;ldquo;You sir truly deserve an honorary doctorate - just for this video. Your impact to generations of confused engineering and math students will forever ripple through our society.&amp;rdquo;&lt;/p>
&lt;/blockquote>
&lt;blockquote>
&lt;p>&amp;ldquo;I have a master&amp;rsquo;s degree in mechanical engineering and I&amp;rsquo;m starting to think I should redo my whole education from ground up searching for this kind of intuitive knowledge. It&amp;rsquo;s absurd that I find out explanations which are as intuitive as this one so late in my life. I&amp;rsquo;m blown away completely! I mean how many bits of information have we stumbled upon during our formal education failing to see how they elegantly relate to each other and form a bigger picture&amp;hellip;oh my!&amp;rdquo;&lt;/p>
&lt;/blockquote>
&lt;p>Moreover, videos tend to be more attention-grabbing than articles. Although most people are unable to read while cooking or brushing their teeth, they can watch videos. So getting started learning has never required less willpower: just search &amp;ldquo;Introduction to&amp;hellip;&amp;rdquo; on YouTube.&lt;/p>
&lt;h3 id="learning-to-watch">Learning to watch &lt;a href="#learning-to-watch" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Of course, YouTube isn&amp;rsquo;t designed to be a learning platform. But there are ways of optimising for a better learning experience.&lt;/p>
&lt;p>The first step is to recognise that YouTube tries maximising user retention. This is a feature, not a bug. It means we can design our YouTube interface such that we end up binge watching informative videos about topics we care about. Here are some ways of achieving this:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Train your algorithm&lt;/strong>: in your YouTube feed, take 2 seconds to press &amp;ldquo;Not interested&amp;rdquo; whenever something irrelevant pops up. It pays off - I find the YouTube algorithm to be surprisingly sensitive to my feedback.&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>&lt;/li>
&lt;li>&lt;strong>Keep separate accounts&lt;/strong>: on a similar note, I have two Google accounts: my main account, and my secondary account. I&amp;rsquo;ll use my main account for watching &amp;ldquo;useful&amp;rdquo; content, while I&amp;rsquo;ll log onto my secondary account for, well, everything else.&lt;/li>
&lt;li>&lt;strong>Block channels&lt;/strong>: use &lt;a href="https://getcoldturkey.com/support/how-to/allow-youtube-channel/">ColdTurkey&lt;/a> to block certain YouTube channels.&lt;/li>
&lt;/ul>
&lt;p>The second step is to recognise the limits of just watching videos. There&amp;rsquo;s a reason we don&amp;rsquo;t abandon more traditional media altogether. When reading a book, I find it much easier recognising when I&amp;rsquo;m confused. However, after finishing a video, I sometimes find myself completely lost and unable to tell where I stopped following. Moreover, whenever I have a physical textbook, I&amp;rsquo;ll often refer back to chapters I&amp;rsquo;ve finished, just to refresh my memory. Here are two partial fixes:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Media notes plugin&lt;/strong>: the &lt;a href="obsidian://show-plugin?id=media-notes">Media Notes&lt;/a> plugin for Obsidian is a real game-changer. It allows you to watch YouTube videos from inside Obsidian and take notes with timestamps. When doing this, I seem to engage more with the material. Taking physical notes while watching YouTube is a bit overkill, so this seems like a good compromise.&lt;/li>
&lt;li>&lt;strong>Rewatch your favourite videos&lt;/strong>: it&amp;rsquo;s easy ending up only consuming new content, just because the YouTube landing page is filled with new videos. But it&amp;rsquo;s worth saving your favourite videos to playlists and rewatching them later.&lt;/li>
&lt;/ul>
&lt;p>As I discussed above, there are some hacks for a better learning experience. However, YouTube could also design their platform differently. They could e.g. develop a distraction-free mode, enabling the user to remove shorts, ads or sponsored content. From a technical perspective, it&amp;rsquo;s doable. But we could also go beyond ordinary videos. For example, Andy Matuschak and Michael Nielsen have explored ways of incorporating an element of spaced repetition in videos, making for a more interactive learning experience; see &lt;a href="https://numinous.productions/ttft/#mnemonic-video">here&lt;/a>. In general, I&amp;rsquo;m excited about integrating modern technology with education.&lt;/p>
&lt;p>All this said, I’d like to thank the strangers who have played - and continue to play - an important role in my education.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>See e.g. &lt;a href="https://www.youtube.com/@AndrejKarpathy">Andrej Karpathy&amp;rsquo;s channel&lt;/a>.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>My friend Åke Lindblom first told me about this. He apparently has an insanely good algorithm.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/from-a-youtube-alumna/</guid><pubDate>Sun, 23 Mar 2025 00:00:00 +0000</pubDate></item><item><title>Look for opinions</title><link>https://isabeldahlgren.github.io/look-for-opinions/</link><description>&lt;p>Opinionated people can be really annoying. Wherever they go, they try convincing you of their ideas. If you have an opinionated uncle, the Christmas dinner might be ruined by a bitter argument. I&amp;rsquo;ve certainly had bad experiences with a dinner-table conversations turning into feuds. For this reason, I used to try having fewer opinions. I somehow assumed this meant being more open-minded and mature. Well, no.&lt;/p>
&lt;h3 id="welcoming-opinions">Welcoming opinions &lt;a href="#welcoming-opinions" class="hash">#&lt;/a>&lt;/h3>
&lt;p>Actually, there are plenty of benefits to actively trying to form more opinions &amp;ndash; even about topics you don&amp;rsquo;t know particularly well. If you learn with a view towards arguing, then you&amp;rsquo;ll pay closer attention to the material. I think this has to do with anchoring: if you pick a stance, even at random, you&amp;rsquo;ll be more emotionally invested. Holden Karnofsky summarised it neatly in &lt;a href="https://www.cold-takes.com/learning-by-writing/">Learning by Writing&lt;/a>:&lt;/p>
&lt;blockquote>
&lt;p>By doing this [trying to have a hypothesis and rearticulating it whenever it changes], I try to &lt;strong>continually focus my reading on the goal of forming a bottom-line view, rather than just “gathering information.”&lt;/strong> I think this makes my investigations more focused and directed, and the results easier to retain. I consider this approach to be &lt;strong>probably the single biggest difference-maker between &amp;ldquo;reading a ton about lots of things, but retaining little&amp;rdquo; and &amp;ldquo;efficiently developing a set of views on key topics and retaining the reasoning behind them.&amp;rdquo;&lt;/strong>&lt;/p>
&lt;/blockquote>
&lt;p>Moreover, chatting with people with strong opinions can be fun. Say you&amp;rsquo;re at a cocktail party. Small talk can be quite tiring, at least after a couple of hours. In this situation, I&amp;rsquo;ll gladly talk to people trying to persuade me of their ideas. Or say you&amp;rsquo;re hosting friends for dinner, and a friend explains her take on a topic you all like.&lt;/p>
&lt;p>Back in school, we were encouraged to form more opinions. Teachers made us write argumentative essays about topics we hardly knew anything about. They knew that most 14-year-olds don&amp;rsquo;t care in the slightest about whether fathers should be given two additional weeks of paternity leave, or if the capital income tax should be raised by 1%. But I don&amp;rsquo;t think it was only meant as an exercise in communicating effectively. It felt as if teachers were saying &amp;ldquo;Go out there in the big wild world, and look for opinions!&amp;rdquo;&lt;/p>
&lt;h3 id="chasing-opinions">Chasing opinions &lt;a href="#chasing-opinions" class="hash">#&lt;/a>&lt;/h3>
&lt;p>So why doesn&amp;rsquo;t everyone have, like, a lot of opinions?&lt;/p>
&lt;p>I think many people, whether they recognise it or not, resort to some kind of agnosticism for fear of being wrong. However, recognising you&amp;rsquo;re wrong just means you&amp;rsquo;re updating your beliefs. It&amp;rsquo;s not that big of a deal. Also, many people think they aren&amp;rsquo;t entitled to hold an opinion since they aren&amp;rsquo;t &amp;ldquo;qualified&amp;rdquo;. This is true for areas in which there&amp;rsquo;s a clear distinction between experts and non-experts. But I have a hunch that we sometimes use this as an excuse for not looking into certain issues. For example, as highlighted in &lt;a href="https://www.amazon.com/Superforecasting-Science-Prediction-Philip-Tetlock/dp/0804136718">Superforecasting&lt;/a>, normal people can form well-informed predictions on certain issues with a bit of practice. Anyway, I think there&amp;rsquo;s a middle ground here: if you don&amp;rsquo;t know all the technical details, just adjust your confidence levels.&lt;/p>
&lt;p>It&amp;rsquo;s worth emphasising that forming opinions can be very difficult and time-consuming. For example, predicting technological progress is notoriously hard. It requires you to do your homework, researching which factors influence scientific advancements and so on. This goes against the idea of opinions coming to us &amp;ldquo;naturally&amp;rdquo;, as if by chance. Yet, this isn&amp;rsquo;t a good reason not to actively seek out opinions. Sometimes, we have to make our minds up in order to take action.&lt;/p>
&lt;p>As long as you have some &amp;ldquo;epistemic etiquette&amp;rdquo; &amp;ndash; being prepared to change your beliefs in the light of new evidence, and not taking everything so personally &amp;ndash; having more opinions seems like a good thing. I&amp;rsquo;m currently trying to build the habit of always having a working hypothesis whenever I learn something new. Having more opinions makes you feel more like part of the world, rather than a bystander.&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/look-for-opinions/</guid><pubDate>Sun, 16 Mar 2025 00:00:00 +0000</pubDate></item><item><title>Useful thought experiments</title><link>https://isabeldahlgren.github.io/useful-thought-experiments/</link><description>&lt;p>Philosophers love thought experiments &amp;ndash; hypothetical scenarios meant to tease out our intuitions about an argument or theory. For example, here&amp;rsquo;s a classic, due to Robert Nozick:&lt;/p>
&lt;blockquote>
&lt;p>Suppose there were an experience machine that would give you any experience you desired. Superduper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, preprogramming your life&amp;rsquo;s experiences?&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>&lt;/p>
&lt;/blockquote>
&lt;p>Many thought experiments seem kind of cooked up, so it&amp;rsquo;s easy to believe they have no practical use. However, one of my key takeaways from &lt;em>The Scout Mindset&lt;/em> by Julia Galef was that thought experiments aren&amp;rsquo;t just a diversion for people with too much spare time. In fact, she argues, certain thought experiments can help us think more clearly about decisions we face in everyday life&lt;sup id="fnref:2">&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref">2&lt;/a>&lt;/sup>. In this essay, I&amp;rsquo;ll go over three interesting thought experiments from her book, and then describe two personal faves.&lt;/p>
&lt;ul>
&lt;li>&lt;strong>The Outsider Test&lt;/strong>: This all comes down to having an outsider&amp;rsquo;s perspective, as if one were calling a friend. How would an outsider evaluate the situation? For example, say John Doe has been given two job offers: one at company X, the other at company Y. Company X will look better on his CV, but he&amp;rsquo;s unlikely to enjoy the day-to-day tasks. In contrast, the job at company Y, although not quite as prestigious, seems more fun. Here, an outsider may say something like &amp;ldquo;If prestige weren&amp;rsquo;t a consideration, which option would you pick?&amp;rdquo;&lt;/li>
&lt;li>&lt;strong>The Conformity Test&lt;/strong>: This is a good one. We&amp;rsquo;re often quick to adopt the beliefs of people we respect. And this is usually a good thing: life is too short to overthink everything, and we&amp;rsquo;ve got to form opinions somehow. (For fans of &lt;em>Thinking, Fast and Slow&lt;/em>, this is System 1 in action.) However, when it comes to more delicate subjects, this mental short-cut might fail. In the Conformity Test, Julia Galef asks you to imagine that people no longer hold your view. (To all contrarians out there, just imagine that the people in your community suddenly become just like everyone else.) There&amp;rsquo;s a particularly interesting spin-off here: what if one of the main proponents of your view, perhaps the person who helped shape your beliefs about the subject, were to change their mind? I think the EA/rationalist community provides a good use case. What if Will MacAskill said he was completely mistaken about longtermism, rejecting the idea altogether? Or if Eliezer Yudkowsky declared that AI after all isn&amp;rsquo;t that big of a threat?&lt;/li>
&lt;li>&lt;strong>The Status Quo Bias Test&lt;/strong>: The underlying idea here is that humans have a bias towards the status quo. If you were to start from scratch, would you actively choose your current situation? For example, imagine a medical student who realises halfway through her second year of med school that medicine isn&amp;rsquo;t for her. Although she cannot imagine herself as a doctor, she&amp;rsquo;s still hesitant to switch subjects. Here the Status Quo Bias Test might come in handy.&lt;/li>
&lt;/ul>
&lt;p>After reading &lt;em>The Scout Mindset&lt;/em>, I soon realised that some of the advice I&amp;rsquo;ve received over the years can be rephrased as thought experiments. Here are two such thought experiments which I&amp;rsquo;ve found particularly useful:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Worst Case Scenario&lt;/strong>: This just involves asking oneself about the worst-case scenario. Is it really that bad? If yes, well, then you know. At least your fear isn&amp;rsquo;t some sort of vague, scary illusion. And if not, good!&lt;/li>
&lt;li>&lt;strong>Fast forward a decade&lt;/strong>: Ten years from now, which decision is one more likely to regret? Humans are famously bad at long-term planning. We&amp;rsquo;ll often fail to take the course of action whose benefits lie in a distant future. (Hence climate change and the fact that most adults don&amp;rsquo;t get enough sleep.) So it might be a good idea to do some kind of Outsider Test, where the outsider is one&amp;rsquo;s future self.&lt;/li>
&lt;/ul>
&lt;p>Some of the above thought experiments might sound familiar. Perhaps you&amp;rsquo;ve already used some of them yourself. After all, these seem like obvious tricks for seeing things for what they are. But perhaps it&amp;rsquo;s helpful having names for these tricks. It&amp;rsquo;s a bit like building a toolkit for better decision-making. I&amp;rsquo;ve applied something like Worst Case Scenario a bunch of times, but only after spending a couple of days dwelling on the issue in a very unproductive way. Thinking in terms of thought experiments would have spared me a lot of headache.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>Nozick, Robert, and Thomas Nagel. &lt;em>Anarchy, state, and utopia&lt;/em>. Vol. 5038. New York: Basic books, 1974.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;li id="fn:2">
&lt;p>Galef, Julia. &lt;em>The Scout Mindset: Why some people see things clearly and others don&amp;rsquo;t&lt;/em>. Penguin, 2021.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/useful-thought-experiments/</guid><pubDate>Sun, 02 Mar 2025 00:00:00 +0000</pubDate></item><item><title>Hunting dependencies</title><link>https://isabeldahlgren.github.io/hunting-dependencies/</link><description>&lt;p>One of the benefits of attending lectures is that lecturers tend to give some really good unsolicited advice every now and then. Last semester, a professor of ours digressed to talk about the importance of identifying the key ingredients of a result. He&amp;rsquo;d just concluded a rather long proof, and was about to clean the blackboard as he said that the result was elementary. See, you only need this one lemma from point-set topology (admittedly a bit niche, but easy to prove), and then the definition of the Fourier transform. Put that way, sure, maybe it&amp;rsquo;s elementary. Explicitly writing down the dependencies of an idea, he said, was a good exercise.&lt;/p>
&lt;h3 id="an-example">An example &lt;a href="#an-example" class="hash">#&lt;/a>&lt;/h3>
&lt;p>I tried doing this a couple of times for definitions, theorem statements and proofs. At first, it felt a bit silly: once I&amp;rsquo;d finished writing down the main components of a result, it seemed trivial. I also overdid it, writing down dependencies even for minor lemmas. But for more complicated theorems or involved definitions, it proved quite useful.&lt;/p>
&lt;p>Now for a concrete example. Suppose we&amp;rsquo;re trying to understand martingales. Martingales can be thought of as sequences of random variables representing fair games: given the value of the $n$th random variable, we expect the value of the $(n+1)$th random variable to stay the same. There&amp;rsquo;s no predictable up- or downward trend. Here&amp;rsquo;s the definition from &lt;a href="https://link.springer.com/book/10.1007/978-3-031-14205-5">Le Gall&lt;/a>:&lt;/p>
&lt;blockquote>
&lt;p>Let $(X_n)_{n \in \mathbb{Z}_+}$ be an adapted, real-valued random process, such that $E(|X_n|) &lt; \infty$ for every $n \in \mathbb{Z}_+$. We say that the process $(X_n)_{n \in \mathbb{Z}_+}$ is a martingale if, for every $n \in \mathbb{Z}_+$, $E(X_{n+1}|\mathcal{F}_n) = X_n$.&lt;/p>
&lt;/blockquote>
&lt;p>First, we need some terminology related to random processes, understanding what&amp;rsquo;s meant by an &amp;ldquo;adapted process&amp;rdquo; and a &amp;ldquo;filtration&amp;rdquo;. Apart from that, we need a solid grasp of conditional expectations: the definition of conditional expectation with respect to a $\sigma$-algebra, as well as the underlying intuition. And that&amp;rsquo;s about it.&lt;/p>
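&lt;p>As a quick sanity check on this dependency list (a standard example, not taken from Le Gall&amp;rsquo;s text): let $\xi_1, \xi_2, \ldots$ be i.i.d. random variables with $P(\xi_k = 1) = P(\xi_k = -1) = 1/2$, set $X_0 = 0$ and $X_n = \xi_1 + \cdots + \xi_n$ &amp;ndash; the simple symmetric random walk &amp;ndash; and let $(\mathcal{F}_n)$ be the natural filtration. The process is adapted and integrable, and since $\xi_{n+1}$ is independent of $\mathcal{F}_n$ while $X_n$ is $\mathcal{F}_n$-measurable, $E(X_{n+1}|\mathcal{F}_n) = X_n + E(\xi_{n+1}) = X_n$. Each dependency &amp;ndash; the filtration terminology and the properties of conditional expectation &amp;ndash; does exactly one job in the verification.&lt;/p>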
&lt;h3 id="should-i-bother">Should I bother? &lt;a href="#should-i-bother" class="hash">#&lt;/a>&lt;/h3>
&lt;p>If you want to learn something thoroughly, then yes. Tracking down dependencies makes you engage more with the material. Learning a concept isn&amp;rsquo;t just a matter of being able to regurgitate the contents of the lecture notes - it requires you to build your own mental model of what&amp;rsquo;s going on. As I see it, the purpose of exercises, quizzes or review questions is to make us think more carefully about a given topic. Otherwise, because humans (or at least maths students) are lazy, chances are we&amp;rsquo;ll go through the material too quickly. Tracking down dependencies is a bit like doing more problems, in that it prompts us to revisit the material.&lt;/p>
&lt;p>Another benefit of nailing down dependencies is that it makes a concept seem more approachable. Some results can seem quite daunting at first, but working out the dependencies can make them seem deceptively simple &amp;ndash; almost to the point where you&amp;rsquo;re struck with the curse of knowledge. Moreover, if I struggle to understand an idea but am clear about the dependencies, I know what to do: I&amp;rsquo;ll just read up on each of the topics involved. In this way, the dependencies translate into a checklist for my learning process.&lt;/p>
&lt;p>Lastly, I found it satisfying seeing how different notions tied into one another. I also put it all into one mindmap which, apart from having a high aesthetic value, gave me the big picture of the subject.&lt;/p></description><author>ronalds.vilcins@gmail.com (Ronalds Vilcins)</author><guid>https://isabeldahlgren.github.io/hunting-dependencies/</guid><pubDate>Sun, 23 Feb 2025 00:00:00 +0000</pubDate></item></channel></rss>