Astral Codex Ten has a post up about the “Coffeepocalypse” argument, of which he gives an example. In my own words, the Coffeepocalypse argument goes like this:
Once upon a time, people thought of coffee as a dangerous drug that might destabilize society, and fought to repress it. But as coffee was proven to be useful (and specifically to increase people’s intellectual output), the naysayers were eventually defeated, and today coffee is accepted by everyone. By analogy, the same thing is going on with AI today: some people are irrationally worried about it, but as it becomes clear that AI is so useful (and specifically increases people’s intellectual output), it will reach fixation and the naysayers will look silly.
Scott Alexander responds to the argument in the general case, offering the totally reasonable observation that one historical case of something happening doesn’t mean that it always happens, or even usually happens. He poses this question:
So my literal, non-rhetorical question, is “how can anyone be stupid enough to think this makes sense?”
I’d like to try to answer that question.
What’s happening
Recently, someone asked me what I thought the medium-term future would be like. I tried to answer thoughtfully, but kept running into forks in the road where I wanted a whiteboard, or perhaps a shot of whiskey. First, it seemed necessary to split the probability distribution over futures into AI-fizzle and singularity cases, with caveats about what either of these meant, and the acknowledgement that the two might blur in the middle. But after that fork in the road, things weren’t any easier: the world still ends up looking very different, in unpredictable ways, even if no model ever gets better than GPT-5, and in the event of an actual singularity, the whole point is that there’s no telling what happens.
As I was trying to explain what I thought would happen, I felt a sort of helplessness wash over me: first, the sense that I probably looked stupid and indecisive, and second, that no matter how masterfully I hedged, I was basically writing a gigantic cursive “I don’t know” in the air. That deep gulf of “I don’t know” is a profoundly uncomfortable experience. Fortunately for human comfort, human beings are built with a way out of it: it’s called narrative.
The coffeepocalypse argument isn’t actually an argument: the conclusion is baked into the premises, and the similarities between coffee and AI are more like aesthetic rhymes than rigorous analogies. Rather, it’s a shorthand way of saying: “Keep calm and carry on, brother. All this has happened before, and all this shall happen again, and the correct thing to hold in your mind is a drink that makes you happy.” It’s easy to respond to it as an argument because it’s dressed up with historical facts (or vague gestures at those facts) and borrows the aesthetic of intellectual discourse. But it’s really just an affirmation: the ASMR experience of “smart person explains why it’s all going to be okay.”
So it’s just dumb?
Actually, maybe not! I think the practice of “provide a narrative frame for people to process scary/confusing situations” is often valuable.
Like, suppose you don’t know that much about AI, but people in your general social bubble are up in arms about it. It all kind of blends together for you, but people are saying:
AI is going to cause mass unemployment/destabilize the economy
AI is going to cause a huge erosion in truth, as deepfakes and AI-generated text overrun everything
AI is going to literally kill everyone somehow
AI is going to ruin school by making homework too easy to cheat on
And various other things of varying severity. You can try following all these threads, but each of them is extremely complicated, and unless you’re already plugged into spaces that talk about AI all the time, you’re as likely to find hype beasts or snake-oil salesmen as good sources of information.
So basically all you can process effectively is some kind of simple metaphor: a pre-existing slot in mental space to put “AI” next to, so you can stop worrying about exactly what’s going to happen.
Incidentally, I think that’s why people react with sadness when the “argument” is “debunked”. It’s not that the coffee analogy (or whatever other analogy of the week) was a rigorous argument that unfortunately got disproven. Rather, it’s a convenient lens that reduces cognitive load, and producing a bunch of other lenses to choose from piles the cognitive load right back on. But there are always more lenses, and they just keep telescoping up and down in size. Unless you have something like an obsession with getting to the bottom of what’s going on (and maybe even then), there’s not much hope of rigorously evaluating all the arguments and arriving at a good probability distribution over the future. Nor is that, among most human beings, a popular sport.
Non-argument is bad argument
Just to really hammer in what I think is going on: the coffeepocalypse thread ACX linked to isn’t actually an argument. It’s a bald assertion that feeds into the general gestalt of a pep rally. Rather than an argument of the form “AI has a similarity to coffee; coffee turned out fine; ergo AI will turn out fine,” it’s more like a sermon: “Look, guys, AI is going to be fine. You know how we all relate to coffee now, and never even think about the conflicts that once cropped up around its use? AI will be like that too.” It’s a raw claim, presented without rigorous justification.
I think “no, it probably won’t be fine” is a reasonable reply to that claim, and better still if you justify that reply. But “your logic doesn’t work” is a strange rebuttal to something with no logic. “Actually, you’re mistaken, given that we’ve lost four of the last five games against our rival team” isn’t a very useful reply to an auditorium cheering that they’ll win the big game later. They’re not making an argument. They’re managing narrative and emotion.