Increasingly, CEOs of software organizations are hearing magical things about the powers of AI tools. When you hear that an AI can build a web app in seconds or rewrite git, it can understandably seem that software is now so cheap we can replace most developers. I think the reality is more nuanced.

What is the Agent like?

Let's name our AI Hal. What is it like working with Hal?

Hal can generate boilerplate fast, even entire functional (if simple) applications. He does not make most of the typos humans do. He is an expert in languages and constructs that are widely repeated across the internet. This is true in every language, making him brilliant at porting code from one to another. He can read at superhuman speed, summarizing and pattern matching across massive swaths of code, which makes him an excellent research partner. He can scan astoundingly dull output for patterns and find the important parts. He can do all this with a flurry of agent friends, affordably. His output rate gives an illusion of brilliance.

But on closer inspection, Hal's work fails in subtle ways. He seems to have no human common sense about requirements that are left unspecified. Sometimes he starts down a wrong solution path and gets so bogged down he can't recover. His solutions can be great-looking but fabulously wrong hallucinations, especially when using technology that's not highly cited across the internet.

It's critical to understand why he succeeds at some things and fails at others. Hal is like a coding buddy who has great focus on a very small context and very broad human knowledge, but lacks everything in between. He doesn't remember what he worked on last week, which in practice means he's a bad long-term maintainer. After only an hour of work he loses his grip on the current task. For him to remember that task, his team, his company, his architectural goals, or his general mission, he has to be reminded over and over. Unlike a human, he is missing the layers of memory, context, relationships, and interactions with the real world that would make his work valuable. And while his knowledge is incredible, his lack of value judgments makes it hard for him to produce work of above-average quality.

(I should emphasize that these limitations are not hypothetical. I have developed features and programs with these tools and experienced these flaws firsthand.)

AI's limits

Another way to express the limits of AI is that its world is too small. And here I mean more than just "the context window is too small". The world a software engineer lives in is full of things that are hard or impossible for an AI to keep "in context":

  • What value am I adding with this task? Knowing this guides many implementation decisions and trade-offs. AI can only make value judgments as good as its training data, and those will often be wrong for you.

  • How does it fit with our architectural vision? This determines whether your complex product is robust to scale, change, and attack, or a Dr. Seuss architecture that falls apart in real life. The most subtle AI failures I see are architectural misunderstandings.

  • What problems or possibilities exist in our product that require novel solutions or technology? AI struggles with problems never before seen on StackOverflow. And it will never champion novel product ideas based on expert knowledge; developers will.

  • How can I trust that the results match our goals? The specifications that come from Product Management are never complete; developers make human assumptions and fill in the gaps all the time based on what they know of the people and the business.

  • What about costs and failure modes? The AI has no wallet, and often does not know what errors to be afraid of and protect against.

All of these require awareness of people and the outside world. They also require wisdom. AI can identify bad solutions and offer pretty good ones, but without more information, it cannot wisely make the tradeoffs an expert needs to make. AI is not the total expert it seems; it is more like an expert at averageness.

Models and their tooling environments will continue to improve on some of the failures I've seen. Better models will help them avoid leaking database connections (a bug I caught one making today). Sub-agents, web search, RAG/MCP (or whatever succeeds them), and model retraining have a long way to go and will give the models a bigger world. But it will never be big enough for everything I noted above.
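To make the database-connection leak concrete, here is a minimal Python sketch of the pattern. The specifics (sqlite3, an `items` table, the function names) are my illustrative choices, not details from the actual incident:

```python
import sqlite3
from contextlib import closing

# Leaky pattern: the connection is opened but never closed, so each
# call holds a connection (and file handle) until the process exits
# or the garbage collector happens to reclaim it.
def count_items_leaky(db_path: str) -> int:
    conn = sqlite3.connect(db_path)
    return conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]

# Safe pattern: closing() guarantees conn.close() runs even if the
# query raises. Note that sqlite3's own "with conn:" form only manages
# transactions; it does NOT close the connection.
def count_items(db_path: str) -> int:
    with closing(sqlite3.connect(db_path)) as conn:
        return conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
```

The two functions return identical results on a small test, which is exactly why this class of bug slips past a reviewer who only checks output.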

The 3 Types of People

As we move into an AI-filled world, I believe 3 types of software developers will emerge.

  1. Those who refuse to use AI except for trivial things. Some can be good and effective for years to come, specifically those who work on architecture or novel technology (including systems sensitive to scale, cost, or security), and those who bridge the gap between product requirements and reality. Nonetheless, without taking full advantage of AI, these people are leaving great potential untapped. Some in this group who simply take specifications and implement them are, indeed, in danger of being replaced by AI.

  2. Those who become over-reliant on AI. I believe this will be by far the largest group. Their skills will atrophy. Some of those skills may genuinely not be so critical anymore, like fluency in Bash or Terraform syntax. Many will still be needed: AI will do the work right 90% of the time, but in the critical 10% of cases where an expert human needs to understand it, these people will be stuck or will cause failures. They will hold on to their roles for years to come because of apparent success, but will be unable to do that final 10% of what makes someone a truly senior technical player.

  3. Those who use AI effectively. They will multiply their effectiveness. Their key trait will be the discipline to understand what the AI is doing for them, learn from it, take ownership of what it generates, and know when not to use it. It's a matter of discipline because trusting results from AI is an incredible temptation that must be fought.

Takeaways

My message to executives would be this: if you are satisfied with software that has a polished look but is internally clunky and fails in unexpected ways, and the software you want is simple anyway, go ahead and let AI be the expert for you. But if you want quality software, if you want to avoid security holes and support issues, if you want to be able to pivot and add features without a quagmire of broken changes, and if you want a team that scales up and adds value for you autonomously, then you need software engineers.

For executives, ask yourself:

  • When I hear claims of massive productivity ("AI did this 1 week task in 1 hour!"), is it really about changes in our core product that offer long-term value?

  • Are large features and architectural initiatives really being completed so much faster, or is momentum being slowed by complexity? What do my most trusted technical people have to say about the quality of work coming out of AI?

  • Am I encouraging and training my people to be effective with AI without becoming over-reliant on it?

For technical people, ask yourself:

  • Am I learning to use AI effectively?

  • Am I learning to add value to my organization in ways the AI fundamentally can't? How can I move toward this?
