Mitchell Hashimoto (co-founder of HashiCorp, creator of Vagrant, Terraform, and Ghostty) explains his "always-on agent" workflow and delivers a sobering take: Git, GitHub, and the open source PR model are fundamentally broken for agentic infrastructure and need to change. One of the most grounded, practitioner-level takes on AI-augmented engineering available.
Mitchell's core operating principle: there should always be an agent doing something while he's working. Not sleeping, not waiting for input. The model is parallel operation — if the agent is coding, he's reviewing; if he's coding, the agent is planning the next task. He explicitly says he's the "mayor" in this workflow — not running autonomous Gastown-style systems, but maintaining active oversight while the agent does continuous background work.
For high-confidence problems he runs one agent. For genuinely hard tasks with uncertain outcomes, he runs Claude and Codex simultaneously and picks the winner. The competition approach serves two purposes: it hedges against wrong paths, and it gives him a quality signal on the problem space.
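The competition pattern can be sketched generically. This is a hypothetical illustration, not how Mitchell wires it up: `claude_attempt`, `codex_attempt`, and the `score` function are stand-ins for whatever invokes each agent on the same task and evaluates the resulting candidates.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins: in practice these would drive Claude and Codex
# on the same task and return each one's candidate change.
def claude_attempt(task: str) -> str:
    return f"claude solution for: {task}"

def codex_attempt(task: str) -> str:
    return f"codex solution for: {task}"

def score(candidate: str) -> int:
    # Placeholder quality signal: real versions might count passing tests,
    # lint warnings, or diff size.
    return len(candidate)

def compete(task: str) -> str:
    """Run both agents on the same task in parallel and keep the winner."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(fn, task) for fn in (claude_attempt, codex_attempt)]
        candidates = [f.result() for f in futures]
    return max(candidates, key=score)
```

The point of the structure is the two purposes named above: running both hedges against one agent going down a wrong path, and comparing the candidates is itself a signal about how tractable the problem is.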
"I endeavor to always have an agent doing something at all times. While I'm working, I want an agent planning. If they're coding, I want to be reviewing. There should always be an agent doing something."
Mitchell bookends his workday with 30-minute sessions dedicated entirely to identifying background tasks for his agent. Before leaving the house, before stopping for the day — he asks: what's a slow, low-urgency task my agent can run while I'm unavailable? During his hour-long drive to the studio for this recording, his agent was doing library research. He set it up in advance, it ran without him, he came back to results.
"Before I stop working, before I leave the house or something, I spend 30 minutes — what can my agent be doing next? What's a slow thing my agent could do for the next time?"
The most striking section of the interview. Mitchell argues that the entire PR-based contribution model is incompatible with agentic development at scale, and this isn't a future problem — it's happening now.
The root cause: the natural back-pressure that kept PR volume manageable was effort. Submitting a good PR required time, judgment, and care. Agents eliminate that friction entirely. The result is a volume problem that existing tooling wasn't designed for: merge queues can't keep pace, maintainers are overwhelmed, and large companies are reportedly looking at rearchitecting their monorepos.
"Git and GitHub forges in their current form do not work with agentic infrastructure today. And it's imminent today."
"The natural back pressure in terms of effort required to submit a change — that was enough. And now that has been eliminated."
Mitchell saw a step-change in AI-generated PR volume on his own projects, particularly Ghostty. The dead giveaway: "the way Claude opens a PR is pure AI" — boilerplate structure, predictable wording, unmistakable pattern. His solution was a vouching system: PRs can only be opened by contributors who've earned trust. Similar to the PIXI model, which also has explicit anti-slop mechanisms.
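A vouching gate of the kind described could look like the sketch below. This is an assumed design, not Ghostty's actual implementation: the maintainer seed set and the chain-walking rule are illustrative choices, and real systems would sit behind the forge's PR hooks.

```python
# Hypothetical vouching gate: a PR gets human review only if its author is a
# maintainer or can be vouch-chained back to one; everyone else is auto-closed.
MAINTAINERS = {"mitchellh"}  # illustrative seed set, not the real list

def is_trusted(author: str, vouches: dict[str, str]) -> bool:
    """Walk the vouch chain (contributor -> voucher) back to a maintainer."""
    seen: set[str] = set()
    while author not in MAINTAINERS:
        if author in seen or author not in vouches:
            return False  # cycle, or nobody vouched for this contributor
        seen.add(author)
        author = vouches[author]
    return True

def triage(pr_author: str, vouches: dict[str, str]) -> str:
    return "review" if is_trusted(pr_author, vouches) else "auto-close"
```

For example, with `vouches = {"alice": "mitchellh", "bob": "alice"}`, a PR from `bob` goes to review while one from an unvouched account is auto-closed. The design restores the back-pressure the PR model lost: effort is replaced by earned trust as the gating resource.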
He's thought about the logical extreme — if agents can build anything, do you even need open source? He doesn't subscribe to this view, but he sees the argument clearly.
One of the clearest frameworks in the interview: tests go from "best practice" to "the mechanism by which agents know they're done." For an agent to self-validate its work, it needs something to validate against. Current test coverage levels — even in well-maintained codebases — aren't high enough to give agents reliable signal. CI/CD pipelines need to evolve to serve this function.
"To make an agent better it needs to be able to validate its work. And so tests go from nice-to-have to the mechanism by which agents know they're done."
Mitchell explicitly calls out Docker and Kubernetes as systems engineered for a certain scale that weren't designed for the volume of non-production agent workloads now hitting them. Companies going all-in on agentic tooling are experiencing churn levels — in terms of branch creation, environment spin-up, and job queuing — that exceed what human teams generated by an order of magnitude.
His hiring philosophy hasn't fundamentally changed — the best engineers he's known context-switch the least and tend to have "boring" private backgrounds (they just build). Competency with AI tools is now table stakes, but it doesn't replace focus or judgment. He would expect everyone on his team to be running an agent continuously.
"I endeavor to always have an agent doing something at all times."
"Git and GitHub forges in their current form do not work with agentic infrastructure today."
"The natural back pressure in terms of effort required to submit a change — that was enough. And now that has been eliminated."
"To make an agent better it needs to be able to validate its work. Tests go from nice-to-have to the mechanism by which agents know they're done."
"The best engineers are the ones that context switch the least."
| Time | Topic |
|---|---|
| 00:00 | Intro |
| 07:19 | HashiCorp origins |
| 15:52 | Early cloud computing |
| 18:22 | The 2010s startup scene in SF |
| 23:11 | Funding HashiCorp |
| 25:23 | The "Hashi stack" |
| 35:28 | An early failure in commercialization |
| 38:28 | The open-core pivot |
| 48:08 | Taking HashiCorp public |
| 51:58 | The almost-VMware acquisition |
| 59:10 | Mitchell's take on AWS, GCP and Azure |
| 1:06:02 | AI's impact on open source |
| 1:07:00 | Ghostty |
| 1:19:13 | How Mitchell uses AI |
| 1:28:36 | Open source + AI |
| 1:31:46 | The problem of Git and monorepos |
| 1:39:57 | Mitchell's hiring practices |
| 1:47:52 | Mitchell's AI adoption journey |
| 1:50:41 | Advice to future founders |
| 1:53:20 | What's changing for software engineers |
| 1:55:03 | Closing |
2-hour interview on The Pragmatic Engineer. Covers Mitchell's origin story (self-taught PHP developer, Ruby on Rails consultancy, failed UW research project that became the HashiCorp notebook), the founding of HashiCorp, the Vagrant origin story, working with AWS/Azure/GCP, and — in the second half — his full take on AI agents, Git's future, open source trust, and sandbox infrastructure. Highly recommended in full for anyone building agent infrastructure.