An LLM is like a Well-Read, Skilled Junior Who is Missing a LOT of Context
I know the capabilities of modern AI might make you feel insecure if you’re at the novice stage of any skill or line of work. It can feel like anything you learn how to do, AI will soon do better.
But as you develop deep domain expertise and transition into working on things that haven’t been done or even thoroughly imagined yet, you’ll start getting the sense that an LLM is like a well-read, skilled junior who has impressive abilities but is missing a lot of context that you’d only gain through years of hands-on experience working in the domain.
They can often be helpful for
A) compiling ideas while brainstorming and
B) implementing things that have been fully scoped down and spec’d out.
But the chasm between A and B is absolutely massive. Novices often don’t realize just how big that chasm is, and they don’t realize how much of the necessary context for crossing this chasm isn’t public knowledge. This is why even if you have a junior who is super skilled and super well-read, you can’t just hand them a senior-level project and expect it to turn out well. You have to provide tons of coaching and keep them on the rails.
The interesting part is that if you don’t continually keep them on the rails, the junior might think it turned out well, when in reality their end result is just completely unusable. Not because they got stuck on some technical issue, but because what they built just doesn’t really solve the problem. They don’t understand the core of the problem well enough to solve it independently. They don’t see it in full.
Now, I know the natural rebuttal to my argument here is “but this is just the current state of AI; wait a couple of years and it will be able to cross that chasm from A to B.” But I think that’s still underestimating the size of the chasm. At a senior-enough level, that chasm contains 99% of the work. Moreover, I don’t think LLMs are even at a point where they can do A and B reliably. In my personal experience, AI coding tools choke on anything outside of the standard logic you’d commonly see in other codebases (and there is shockingly little of that in what I’m working on). Basically, I think AI is still a long, long way off from covering the full pipeline, or even a substantial chunk of it.
And even if, let’s say, there comes a day when LLMs can reliably cover the full pipeline… I don’t think that should be worrisome to anyone who is building something with high future value. When you’re in that setting, building stuff expands your opportunity surface area to build even higher-value stuff. The more you do, the more there is to do. The more of your work AI takes, the more work you have left to do. But unfortunately, things with high future value are the hardest to automate because they typically haven’t been done or even thoroughly imagined yet. So basically you’re safe from AI doing your work, and you’re sad about that.