The Returns Sandwich
Level at the bottom, rocket at the top
You may have seen the studies, 1,000 years ago in AI time, showing that access to LLMs improves low performers’ output. The theory is pretty simple: if you’re really bad at something, and AI is mediocre at it, then AI can help you do better. You don’t have to take an academic paper’s word for it, either. There are plenty of examples:
People who don’t speak fluent English benefit from using LLMs to compose business emails, so they can better interact with English-speaking clients
People who don’t know how to code can use LLMs to produce custom graphs, charts, or data analyses, which would ordinarily require writing code
People who don’t know how to draw can use AI to bring visual ideas they have to life[1]
It’s a common human experience to complete most of a project, but struggle for a long time finishing the last piece: the skilled programmer who has no eye for design and never gets around to styling their app, the academic who gets bored and overwhelmed formatting their data tables in four different ways to submit to various journals, the talented poet who doesn’t know how or where to submit their work for publication. So a quick and easy way to automate a bottleneck - even at a mediocre level - can make a big difference. If you aren’t occasionally thinking of using LLMs to shore up your weaknesses or do the most frustrating/boring parts of day-to-day tasks, I think you probably should!
Once you’re pretty good at something, though, LLMs provide limited utility. I’ve pasted several blog post drafts into Claude or ChatGPT, but I’ve never gotten very useful advice; their suggested rewrites are (to me) obviously worse prose than mine, and conceptually they usually suggest equivocating or watering everything down. And while LLMs were really useful in helping me learn about abstract algebra, or about LLMs themselves, once I gained any familiarity I caught them making all kinds of obvious mistakes. AI just doesn’t help all that much in the intermediate wasteland; its skill level at most things is somewhere between total beginner and amateur.
Returning to our examples from before:
A decent English speaker probably shouldn’t get ChatGPT to write their emails; they’ll seem weird and impersonal, and likely lead to missed opportunities
Someone familiar with R should probably just write the code for whatever statistical analysis they need - LLMs are likely to get these wrong in subtle ways
A competent artist can get much closer to their specific vision by drawing something themselves than by prompting an image model
These issues existed for the low performers too, but even spotty output is often better than no output at all. Still, once you can muddle through something yourself, an LLM becomes a frustrating companion. It almost gets it, but small mistakes compound, and it ends up best to just do the thing yourself.
But - and this is the sandwich - once you’re great at something, LLMs become useful again. With mastery, you aren’t confused by LLM mistakes and have the vocabulary to correct them, and your elevated skill lets you outsource whatever piece of a workflow you want, while seamlessly picking up the parts that AI still can’t handle. So:
A masterful wordsmith can ask LLMs for advice on how to reword the weakest parts of an email, take that advice as a jumping-off point, then use the LLM to check the entire email for any lingering typos or ambiguities
A masterful programmer can write excellent requirements to break coding tasks into small chunks that an LLM can actually manage, giving architectural advice and keeping the AI on track like one would a very junior dev in training
A masterful artist can use AI tools to cheaply test (or brainstorm) lots of different layouts for a composition, then draw a variant on their favorite from scratch
So strangely, modern AI tools are most useful for the stuff you’re worst at and the stuff you’re best at, while being frustrating and counterproductive for what’s in between. Once you see it this way, a lot of AI discourse suddenly makes more sense. There are starry-eyed beginners brimming with excitement that they can do things now that they never could before! There are justified cynics who don’t understand the hype, pointing out that AI keeps making really dumb mistakes. And then there are virtuosos claiming that AI makes them 5 times faster at something they’re already among the best in the world at.
What happens to these dynamics as AI gets smarter is left as an exercise for the reader.
[1] I don’t mean to wade into the AI art debate here - I’m thinking of a guy I met at a party who excitedly showed me a DALL-E 3 image of an enlightened mouse he’d dreamed up. There is no universe where he would have either bothered to draw or commissioned this silly thing, but he still got the joy of giggling at it.