There’s a lot of noise about AI. People get heated about regulation, such as California’s recently vetoed SB-1047. People get excited about all the amazing things AI can do now, and smug about the things it still can’t.
The sheer volume of AI noise (in both senses) tends to compress observations about AI progress into a few templates. Much like how political positions tend to glom together as group affiliations work their magic, the most popular AI positions form their own gravity wells.
As I see them, the positions are something like this:
The Doomer: Intelligence is an extremely powerful force, such that having a lot more of it leads predictably to total domination and control. Humanity is currently working on something that will be smarter than us. So if things go wrong at all (and they will - this is engineering, after all), we’ll be dominated and lose control over the future.
The Cynic: Oh, look, the capitalists have a new toy. As usual, they’re busily fleecing each other with overhyped slide decks, and crushing the little guy as a side effect. ChatGPT is a low-quality plagiarism machine that wastes tons of water, and “AI art” is soulless and awful to look at, yet risks harming human artists. It’s like if Bored Apes were also scabs.
The Hustler: I tried the newest AI offering, and I am BLOWN AWAY. Working out/writing software/tapdancing will NEVER be the same. Thread below for ten most mind-blowing insights, or subscribe for weekly updates. Don’t fall behind!
The Evangelist: Modern society is falling apart. Our institutions are sclerotic, it’s impossible to build anything without drowning in red tape, and the birth rate is apocalyptic. The same forces that oppose progress everywhere are coming for AI in particular. But AI holds too much promise to let them win. AI is just math/speech/code anyway. It should be free!
Doomers and Evangelists are natural enemies, though they both agree AI is a big, potentially transformative deal. Hustlers and Cynics are sort of opposites too, in that one wants to raise AI’s status and the other wants to lower it, but they’re mostly trying to drum up traction in their own non-overlapping bubbles, so sparks don’t fly too often.
So, who has the right of it? Unsurprisingly, none of these caricatures I made up strikes me as correct. So without further ado, it’s time to shred the strawman.
Doomer Wishlist
I’m most sympathetic to the Doomers. They reason well, don’t play politics much, and their high-level view is eminently reasonable. If it’s true that superintelligent AI is around the corner, and that we have no obvious method of controlling it, then, yeah, that’s pretty scary.
The main issue I have with Doomers is that they rarely chill out. Specifically, events keep happening that make me worry less, but the angle from prominent Doomers is usually that the correct move is to worry more.
For example:
Oldschool Doomer forecasts imagined AIs which were brutally optimized for alien values. But cutting-edge AI specifically predicts what humans would most likely type on the internet. We get a lot of human values content there for free!
Doomers worried about algorithmic breakthroughs leading to a situation where powerful AI could be run on cheap hardware, such that at least one such AI would inevitably escape containment. But modern AI companies are trying to secure whole new power plants to train the next generation of models.
Doomers often freak out about the pace of progress, and what it might mean if extrapolated further. But they don’t take much comfort from forecasts suggesting slower progress, such as the median predictions of superforecasters.
Again, of the camps I’ve made up, I’m probably closest to the Doomers. But I can’t subsist on a media diet of exclusively AI existential risk worriers, because they usually only seem to revise their worry level upward. Mine has gone up and down, and mostly a little more down.
Cynic Wishlist
Doomers are, to a first approximation, my people. My favorite AI bloggers are all Doomer or Doomer-adjacent, and I feel at home with their thinking style. So I feel a little bad criticizing them much. I feel bad about criticizing the Cynics for the opposite reason: it’s too easy, since I don’t like them very much.
Still, I do think the Cynics have a useful role to play, and I’d like them to play it better. You really shouldn’t take everything large, powerful companies do at face value, and Pareto-optimal trades, in practice, are earned by fierce political advocacy.
So, how might I like them to change?
Cynics aren’t very scope sensitive. A frequent recent complaint is how much water LLM inference uses, maybe because the idea of dumping out water is evocative. But how many cups of water does a typical hot shower use? If LLMs didn’t exist at all, would any environmental metric be meaningfully different? So far, I doubt it.
Cynics focus a lot on harms to the little guy, but rarely consider benefits to the little guy. Job automation is scary, but even currently existing AI lets programmers skip the most boring parts of their jobs, makes formatting a breeze, helps people make meal or exercise plans, etc. It’s not at all obvious that random middle class (or lower class) people’s lives would be better if LLMs suddenly didn’t exist.
Cynics don’t look ahead very much. They make fun of things like AI’s inability to draw hands, then look silly when that issue is resolved mere months later. If you only ever look at something’s most embarrassing failures, you’ll underestimate it to your peril.
Hustler Wishlist
Shucks. Doomers I respect, but have finicky differences with. Cynics I feel like I understand, but that understanding makes me frustrated with them. But Hustlers… I don’t know. The whole mindset (grindset?) is alien to me.
But sure, here’s how I’d like them to be different.
Hustlers are breathlessly excited about, like, everything. It’s exhausting. Yes, OpenAI’s newest model is genuinely impressive, and I’m sure Sam Altman’s 7 Must-Have Breakfast Ingredients are plenty nutritious, but not every incremental improvement is a watershed moment.
Hustlers seem to live in a world where the only bottlenecks to anything are exactly what AI improves. So there’s no reason why, when GPT-eiei0 (just following their naming convention to date and guessing ahead) comes out, everyone shouldn’t immediately start a SAAS business or whatever. I assure you, hustlers, everything still requires plenty of schlep: GPT voice mode helped me install a car seat a little bit, but my mom helped a lot more.
Hustlers are way too credulous about rumors. Most of the time, the reality falls far short of the whispered hype.
Of course, I’m not being fair to Hustlers. They’re optimizing for engagement over a broad audience, and this stuff is great for that. And whatever, they help me learn when there’s a cool new product. It’d just be nice if I could take their word for it, instead of having to go play with the product myself, 90% confident they’ve oversold it.
Evangelist Wishlist
Look, I’m excited about AI. And I’m sad about all the various cool stuff that should happen but doesn’t, due to the dysfunctions of the modern state. So many of the Evangelist’s points resonate with me. Especially the point that, when people are trying to make the world a way cooler place, it’s better not to get in their way.
But of all four groups, the Evangelists are the ones I have the largest problems with.
Evangelists lie. Or, if not, they’re misinformed to a mind-boggling extent. For example, in this letter to the House of Lords, a16z says that “recent advancements in the AI sector have resolved” the issue where AI internals are not well understood. Which is probably news to the entire field of AI Interpretability, which includes work at all the leading AI labs, and has so far only scratched the surface of how modern AI thinks.
Evangelists misrepresent their opponents. Again, I’m not sure if this is intentional or not, but so often I see a supposed slam dunk against Doomers that totally fails to engage with any of their arguments. Which is weird, because many of those arguments are pretty simple.
Evangelists (often) have conflicts of interest, and act as if they don’t. If you run a firm that will benefit financially from low AI regulation, or are building a product that depends on AI progress, those facts will obviously shape your opinions about AI. Which is fine! But while Doomers bend over backwards to describe potential conflicts of interest, Evangelists more often go in polemical directions and act like their espoused views are just common sense.
I do think there’s an important place for Evangelists in the discussion; someone needs to remind us about the promise of AI. But I wish they’d operate in better faith.