It’s been a while since I’ve had a crisis of confidence. But tonight, I read Steve Yegge’s recent post, “Revenge of the Junior Developer.” In it, he lays out a vision of a future—only two years away—in which nobody writes code anymore. I felt a cocktail of anger, fear, worry, and disbelief. Then, I thought through the implications, and came up with a plan of action: Kerrick’s Wager.
First, a bit of context.
Steve Yegge on Agentic Software Development
Steve Yegge asserts that the future of software development will not involve designing software architectures or writing code. Instead, he says that within two years individual contributors will be managing fleets of AI agents. These agents will form an “AI org chart” under each developer: some agents will manage the rest, which in turn fix bugs, create features, and write tests and documentation. Developers will manage the agentic managers. This stands in stark contrast to what some now call “manual programming” — code written and read by humans.
Is Steve Yegge Wrong?
I cannot know whether Steve Yegge is right or wrong, and I do not know enough to make a prediction. If the future of productivity is agentic software development, my current skillset will be significantly devalued. Instead, businesses will value people with the skills to use these tools effectively. If the future of productivity remains manual programming, my current skillset remains valuable. It’s tough to know what I should do, since I cannot predict the future. But this is not the first time humanity has considered an unknowable assertion.
Pascal’s Wager
Blaise Pascal, a seventeenth-century philosopher, came up with an argument about the possible existence of a God:
Because the Abrahamic God is metaphysical and cannot be measured, reason cannot determine whether He exists. If He exists, living your life according to His religion means you get everlasting pleasure in Heaven, with the alternative being everlasting torture in Hell. If He does not exist, living your life according to His religion means a minor sacrifice of certain luxuries and pleasures, with the alternative being a more self-determined life. Because you are comparing a relatively minor gain or loss (He does not exist) with an infinite gain or loss (He exists), the expected value of betting on His existence and living life accordingly is much higher than that of betting against it.
While this argument is riddled with problems when used to make decisions about faith, it is a useful template for thinking about bets on unknowable outcomes in the face of large risk. Apply it to the future of software development, and you get a corollary: Kerrick’s Wager.
Kerrick’s Wager
Because the future is unknowable, reason cannot determine whether the future is agentic software development. If it is, learning new skills to take advantage of these new tools means you have value to a business that will earn you a living wage, with the alternative being irrelevance leading to joblessness. If the future remains manual programming, learning these skills means you wasted a few dozen hours and a few hundred dollars trying another Fad of the Day, with the alternative being a few nights out on the town. Because you are comparing a relatively minor gain or loss (the future remains manual programming) with a life-changing gain or loss (the future is agentic software development), the expected value of betting on agentic software development and learning skills as though it is the future is much higher than that of betting against it.
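The wager above is, at bottom, an expected-value calculation. Here is a toy sketch of that arithmetic. The probabilities and payoff numbers are hypothetical placeholders I chose for illustration, not figures from the argument itself; the point is only that an asymmetric payoff dominates even at a low probability:

```python
def expected_value(p_agentic: float, payoffs: dict) -> float:
    """Expected payoff of a strategy, given P(the future is agentic)."""
    return p_agentic * payoffs["agentic"] + (1 - p_agentic) * payoffs["manual"]

# Payoffs on an arbitrary scale (hypothetical): losing your livelihood
# dwarfs a few wasted evenings and a few hundred dollars.
learn = {"agentic": +100, "manual": -1}   # invest in learning the new tools
skip = {"agentic": -100, "manual": +1}    # enjoy nights out instead

# Even granting only a 10% chance that the agentic future arrives,
# learning comes out far ahead.
p = 0.10
print(expected_value(p, learn))  # 0.1 * 100 + 0.9 * -1 = 9.1
print(expected_value(p, skip))   # 0.1 * -100 + 0.9 * 1 = -9.1
```

With these (made-up) payoffs, learning beats skipping for any probability above about 1%; the asymmetry, not the precise numbers, is what drives the wager.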
What Now?
I could dedicate my nights and weekends in April to enjoying the luxuries of a relaxed life with the companionship of my wife, another senior software engineer. Or, my wife and I could both invest time and money to learn the skills that will become necessary if and only if the future of software development is agentic:
- Read books:
  - 13 hours: Designing Machine Learning Systems by Chip Huyen
  - 16 hours: AI Engineering by Chip Huyen
  - 8 hours: Prompt Engineering for LLMs by John Berryman and Albert Ziegler
  - ?? hours: Reviewing an early copy of Vibe Coding by Gene Kim and Steve Yegge, if they would send me a review copy.
- Experiment with agents:
  - $10 per hour: Claude Code
  - $20 per month: Cursor
  - $15 per month: Windsurf
  - Free: Aider connected to LM Studio running a local LLM on an M1 Max MacBook Pro with 64GB RAM
I’m taking the positive-EV bet. Are you?
14 Comments
Really interesting way of presenting both sides. I am fascinated with AI and believe you should become very familiar with it. I don’t believe that manual coding will be gone in the next 2 years. I believe the field will evolve for individuals who stay ahead of technology. This opinion comes from someone who was working before the first cell phone or computer was developed. So proud of your insight and drive.
Mad respect. This is a compelling argument. It’s ultimately a risk management strategy, and a smart one.
The only observation I’d add is that agentic programming is unexpectedly and *highly* addictive. Many people are discovering this independently. The word ‘addiction’ is thrown around a lot.
So if you take Kerrick’s Wager, you will soon realize the new way is counterintuitively more fun than manual coding, once you get the hang of it after a day or so.
That’s great to hear! Part of the reason I felt a cocktail of unpleasant feelings was that I didn’t want to lose the sheer joy of manual programming. If agentic programming is just as addictive and fun, I have hope.
Great thinking Kerrick! I’m onboard with “Kerrick’s Wager”!
The concept of managing AI agents instead of writing code is intriguing, but I wonder what skills will become most important in that new world. Will it just be about overseeing the right AI tools, or will there be a deep understanding of how those agents work that developers still need to have?
I really appreciate how you framed this uncertainty as a wager—it’s a pragmatic way to handle a fast-changing landscape. I’ve also started brushing up on AI-assisted workflows while continuing to deepen my fundamentals, just in case manual programming still has a long runway.
I’ve been thinking about this a lot recently, especially with how rapidly AI tools are advancing. I wonder if the future of software development will actually be more of a hybrid approach—where we still write code, but AI helps automate the more tedious aspects. That way, developers can focus on higher-level thinking and innovation.
I wonder if, as developers, we will eventually end up learning how to ‘program’ AI itself rather than traditional software. It’s a huge shift, but maybe not as drastic as it seems.
Love the philosophical framing here—especially since we can’t really opt out of this shift if it happens. Kerrick’s Wager feels like a smart hedge: stay grounded in manual skills but experiment with AI workflows.
This reminds me of how DevOps transformed infrastructure roles—maybe agentic programming will be a similar shift rather than a replacement. Kerrick’s Wager feels like a smart hedge against either direction the future takes.
I really like how you’ve framed the dilemma—being caught between two possible futures. I wonder if the key will be adaptability: those who can effectively manage both manual and AI-driven programming might have the edge.
It’s fascinating to think about a future where coding itself becomes a managed task by AI. While I see the potential for huge productivity gains, I also wonder about the creative and strategic roles that human developers will continue to play. Will AI replace the ‘architect’ or will it just become another tool to enhance human decision-making?
The idea of managing an org chart of AI agents is fascinating but also a bit dystopian. Your wager reminds me of Pascal’s Wager—it’s a thoughtful way to frame the uncertainty and make practical decisions without pretending to predict the future.
Kerrick, I really appreciate how you framed the uncertainty around AI-driven development as a kind of Pascal’s Wager—it’s a thoughtful way to navigate a shifting landscape. It resonates with how a lot of us are feeling: not sure what the future holds, but knowing we need to adapt either way. The idea of managing AI org charts is wild, but your approach to upskilling without abandoning core programming fundamentals feels like a smart hedge.