Max Kanat-Alexander argues that the rapid pace of AI coding agent advancement creates uncertainty about where to invest. The key insight: investments that help human developers will also help AI agents succeed. Organizations should focus on standardizing development environments, improving validation and testing, making codebases more structured and easier to reason about, and dramatically increasing code review capacity and quality.
Max opens by describing the whiplash facing developer experience teams: every 2-3 weeks brings new AI capabilities that challenge previous assumptions. CTOs and DevEx leaders are asking what investments will still be valuable at the end of 2026. Many have defaulted to "just coding agents" as the answer, but that is not sufficient.
Development Environment Standards: Use industry-standard tools the same way the industry uses them. This is what is in the training set. Fighting the training set with custom package managers or obscure languages reduces agent effectiveness.
CLI and API Access: Agents need either a CLI or an API to take action. Computer use exists, but text interaction is what LLMs understand most natively. Where accuracy matters, use the interface that maximizes effectiveness.
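As a rough illustration of why text interfaces suit agents (the helper and command here are hypothetical, not from the talk): an agent loop can wrap a CLI call, read its text output directly, and see the full error text when something fails.

```python
import subprocess

def run_cli(args: list[str]) -> str:
    """Run a CLI command and return its text output.

    Text in, text out: this is the interaction mode LLMs handle most
    natively, unlike screenshots or GUI automation ("computer use").
    """
    result = subprocess.run(args, capture_output=True, text=True)
    if result.returncode != 0:
        # Surface the full stderr so the agent can reason about the failure.
        raise RuntimeError(f"{' '.join(args)} failed: {result.stderr.strip()}")
    return result.stdout

# Example: plain text output any agent can parse.
print(run_cli(["echo", "build ok"]).strip())  # prints "build ok"
```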
Validation and Testing: Any objective, deterministic validation increases agent capabilities. Clear error messages are essential - agents cannot divine meaning from "500 internal error." However, asking agents to write tests on untestable codebases produces meaningless tests.
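A minimal sketch of the clear-error-messages point (the validator and its fields are invented for illustration): a deterministic check whose failures name the field, the bad value, and the expectation, rather than surfacing an opaque status code.

```python
def validate_config(config: dict) -> None:
    """Deterministic validation with actionable error messages.

    An agent cannot divine meaning from "500 internal error", but it
    can act on an error that states exactly what is wrong and why.
    """
    required = {"name": str, "port": int}
    for field, expected in required.items():
        if field not in config:
            raise ValueError(
                f"missing required field '{field}' (expected {expected.__name__})")
        if not isinstance(config[field], expected):
            raise ValueError(
                f"field '{field}' is {type(config[field]).__name__}, "
                f"expected {expected.__name__}")
    if not 1 <= config["port"] <= 65535:
        raise ValueError(f"port {config['port']} is out of range 1-65535")
```

An agent that sees `port 99999 is out of range 1-65535` can fix the config in one step; one that sees a bare 500 can only guess.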
Agents work better on better-structured codebases. Legacy enterprise codebases that no human can reason about are equally opaque to agents. The agent cannot read minds or attend verbal meetings. Anything not in the code needs to be written somewhere accessible - especially the "why."
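One hedged example of writing the "why" where an agent can see it (the scenario is invented): the rationale lives in a comment next to the code, instead of in a meeting that left no transcript.

```python
# Why, not just what: the retry ceiling is 3 because the (hypothetical)
# upstream billing API rate-limits clients after 4 rapid attempts --
# a constraint agreed in a design review and invisible from the code alone.
MAX_RETRIES = 3

def fetch_with_retries(fetch, max_retries: int = MAX_RETRIES):
    """Call fetch() up to max_retries times, re-raising the last error."""
    last_error = None
    for _ in range(max_retries):
        try:
            return fetch()
        except ConnectionError as e:
            last_error = e
    raise last_error
```

Without that comment, neither a new teammate nor an agent can tell whether 3 is load-bearing or arbitrary, and either might "helpfully" raise it.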
Writing code has become reading code. With agentic coding generating far more PRs than ever, code review itself is becoming a bottleneck. The goal is not to shorten overall review time but to make each response fast. Reviews must be assigned to specific individuals with SLOs - posting to team channels leads to one person doing all reviews.
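The assignment point can be sketched as a simple round-robin assigner (the names and the 4-hour SLO are assumptions, not a prescription from the talk): each PR gets one named reviewer and a respond-by deadline, instead of landing in a shared channel where one volunteer ends up doing every review.

```python
from datetime import datetime, timedelta
from itertools import cycle

REVIEW_SLO = timedelta(hours=4)  # assumed response-time SLO

def make_assigner(reviewers):
    """Return a function mapping each PR to (pr_id, reviewer, respond_by).

    Rotating through named individuals spreads the load and makes the
    response deadline attributable to one specific person.
    """
    rotation = cycle(reviewers)
    def assign(pr_id: str):
        return pr_id, next(rotation), datetime.now() + REVIEW_SLO
    return assign

assign = make_assigner(["alice", "bob", "carol"])
for pr in ["PR-101", "PR-102", "PR-103", "PR-104"]:
    pr_id, reviewer, respond_by = assign(pr)
```

Real teams would likely do this inside their code host's review-assignment tooling; the point is the shape, one accountable reviewer per PR with a clock attached.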
Bad codebase + confusing environment leads to agents producing nonsense, developer frustration, rubber-stamped PRs, and degraded codebases - a vicious cycle where agent productivity decreases over time.
Conversely, improving agent productivity creates a virtuous cycle of accelerating improvement. Now is the time for these investments, because the companies that can execute on them will see the greatest differentiation in software engineering velocity.
"What's good for humans is good for AI."
"The future is super hard to predict right now."
"You want to use the industry standard tools in the same way the industry uses them... because that's what's in the training set."
"I'm a programming language nerd. I love those things. I do not use them anymore in my day-to-day agentic software development work."
"Any kind of objective deterministic validation that you give an agent will increase its capabilities."
"The agent cannot read your mind. It did not attend your verbal meeting that had no transcript."
"Every software engineer becomes a code reviewer as basically their primary job."
"If you don't have a process that is capable of rejecting things that shouldn't go in, you will very likely actually see decreasing productivity gains from your agentic coders over time."
"In the 20 plus years that I've been doing this, I have never found a way to teach people to be good code reviewers other than doing good code reviews with them."