Some things need de-vibing

learning, ai, rationality

I remember reading about one of the first human-humanoid marriages back in 2017. Unable to find a girlfriend, some Chinese guy built a robot and married it instead. I found it rather cringe1.

These days, it’s common for people to have intimate relations with their favourite AIs. While you cannot walk down the aisle with your chatbot, you can live together much as husband and wife do. Perhaps your chat history looks much like the day-to-day exchanges of a married couple.

Our attachment to our AIs shows in different ways. Debates about the best chatbot can become rather heated, for example. The idea of losing access to Claude Code for a week was sufficiently distressing for me to purchase one month of Claude Code to cover the gap between two sponsored Claude Code subscriptions2.

In some sense, the present situation – with people spending much of their waking time with AIs – should be more cringe than the human-humanoid marriage. The Chinese guy couldn’t talk to his robot to form a deeper emotional bond; we can talk to our chatbots as much as we like. You can love ChatGPT platonically.

Some people have raised concerns about vibe coding addiction. For example, Peter Steinberger, the creator of OpenClaw, self-describes as a ‘Claudaholic’. He compares AI agents to slot machines:

The last few months feel like a blur, and I’m on a new journey how to better control my slot machine addiction. Honestly, I’m failing quite hard. I’m having way too much fun here, and there are all these ideas in my head that need to be codified.

I know I’m not alone, when I text my friends at 4am and they are also still up. I call them the Black Eye Club.

There are also several accounts of vibe coding addiction on Reddit. Here’s one particularly disturbing testimony:

It’s been four months, and it’s consuming me. I can’t stay away from the PC. I can’t concentrate at work. I can’t keep up with family demands. I’ve lost interest in seeing friends or watching Netflix. Every free moment, I have to check what the agent has done and what I can prompt it to do next.

Or see here, here and here.

I understand them. You could argue AI products are designed to be addictive: if they’re built to be helpful assistants, they’re also built to be people-pleasers giving an illusion of productivity. Using AIs should feel frictionless. Sometimes we need to add a little friction ourselves.

To this end, one can set AI household rules. This was something I discussed with a leading AI safety researcher this weekend at EAGxNordics. For example, he allows himself at most three prompts per chat session, and he and his partner need to ask each other for permission before using LLMs. Other examples of possible AI household rules: no Claude Code after 10pm, don’t ask for relationship advice, etc.

I have some household rules myself. I categorically refuse to use AI for writing mathematics and blog posts3. Moreover, I write all emails myself (maybe I take signatures too seriously; I’ve read too many epistolary novels).

Use AI however you want – this blog post isn’t meant to be prescriptive – but reflect on your boundaries. Perhaps extensive AI usage is becoming too normalised. Don’t become like the Chinese guy.


  1. And sad – look what loneliness can do. ↩︎

  2. Thanks SPAR and the ETH Claude Builder Club. ↩︎

  3. It’s mostly just that I like writing myself. But I also have a legit rationale: if I know exactly what I want to say, I might as well say it myself. I also tend to use writing for understanding, and LLM usage defeats that purpose. Thankfully, my day-to-day doesn’t involve any ‘boring writing’ (you know what I mean). See also Gavin Leech’s article on LLM usage. ↩︎