AI, Eng Orgs, and the Agile Manifesto
Why You Need to Redesign Your Engineering Organization for AI
Seventeen software developers met at a ski lodge in Utah, united by a shared frustration. Across the industry, the way software was being built had become buried in process: months of upfront planning, walls of documentation, layers of approval before a single line of code shipped. Every one of them had independently found lighter, faster ways to work, and they’d each seen the results. They got together to find the common thread, and what they landed on was a set of principles that reduced the distance between doing the work and knowing whether it was right. Shorter feedback loops, less navigational lag, and a more direct connection to the outcomes of their work. You could imagine this as a story from today, in 2026: some cutting-edge AI-enabled SWEs, impatient with the pace of work, seeing their new speed and capabilities wasted by corporate theater. But it’s not. It’s from 2001, the weekend the Agile Manifesto was created.
The manifesto authors weren’t saying “we can build faster, stop slowing us down.” They were saying “we can’t predict the future, so stop pretending we can and build a process that embraces change.”
Then AI opened up a whole new multiplier on speed, productivity, and time-to-production for software development. What previously took days or weeks can be coded and deployed in hours, and code itself has become something of a commodity. When a single engineer can build a service in two hours, a week of unchecked drift amounts to much more than a minor course correction.
I wrote recently about how AI broke the rhythm of work - how it accelerated the output layer of knowledge work while doing nothing for the coordination layer that holds it together. This article is about what to do about it in your engineering org. We’re not writing a new manifesto; we’re redesigning the org itself for the speed we’re actually moving at.

The Principles Were Right
We need to go back and read the twelve principles behind the Agile Manifesto - not the ceremonies we built on top of them or the derivative attitudes we formed because of those ceremonies. This isn’t about SAFe frameworks, Jira workflows, or sprint velocity dashboards. It’s about the actual principles, which are almost eerie in how well they describe what AI-era engineering needs. Below are the six of the twelve I think are most worth highlighting.
“Working software is the primary measure of progress.” This is the single most violated principle in modern engineering. We replaced it with story points, velocity charts, burndown graphs, and PR counts. These are all activity metrics; none of them tell you whether the software works, whether it solves the problem, or whether it’s moving you closer to the goal. The 2025 DORA report confirms the consequences: teams using AI coding assistants merged 98% more pull requests, but organizational delivery metrics stayed flat. More code, more activity, all for the same output.
“Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.” Environment, support, and trust. Most orgs responded to AI by doing the opposite: adding review layers, requiring approval on AI-generated code, and monitoring activity more tightly. They’re managing AI the way I described in the last article: approving every change, every edit, and every output.
“Deliver working software frequently, with a preference to the shorter timescale.” AI can deliver in minutes or hours. Sprint ceremonies gate it to weeks and bloated SDLCs can push it to months. The two-week sprint was always a proxy for “as fast as we can coordinate.” AI changed what “as fast as we can” means.
“Simplicity — the art of maximizing the amount of work not done — is essential.” AI tempts you to generate more: more code, more features, more PRs. But the principle doesn’t say “do more work faster.” It says maximize the work you don’t do. The most powerful use of AI isn’t generating more code; it’s making it so you don’t need to write certain things at all. Design the system so the work isn’t necessary - that’s simplicity.
“The best architectures, requirements, and designs emerge from self-organizing teams.” Self-organizing means not centrally coordinated, not managed through ceremony, not aligned through weekly status reports. Self-organizing requires clear boundaries, clear ownership, and enough autonomy that a team can actually make decisions without waiting for a sync. Most org structures actively prevent this.
“At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.” The retro is the one ceremony specifically designed to catch when the process itself is broken. And it’s the one most teams treat as performative. The principle doesn’t say “hold a meeting where people share feelings.” It says reflect, tune, and adjust. It’s a feedback loop on the system itself.
Going back to these basic ideas and reinterpreting them for our orgs today is one of the most critical parts of designing modern teams.
Org Design Is the Unsolved Problem
By design, the manifesto describes how teams should work. It doesn’t describe how organizations should be structured. The principles assume a team. They don’t tell you how to draw the boundaries between teams, how to size them, or how information flows across an organization. That’s org design. The manifesto authors were working in an era where a team of 5-8 people might ship a feature every few weeks. The coordination problem was mostly within the team. Today, AI-enabled engineers can ship features in hours, which means the coordination problem has moved between teams. The inter-team interfaces, the handoff points, the dependencies - that’s where the rhythm breaks now.

Team Topologies offers the best structural thinking I’ve seen on this. Matthew Skelton and Manuel Pais argue that you should design team boundaries around cognitive load - how much complexity a team can hold in their heads and still be effective. Their second edition, released in September 2025, elevates this from a consideration to the foundational organizing principle. When AI changes what a team can produce, it also changes the cognitive load on everyone downstream. More output means more to review, more to integrate, more to coordinate - unless you redesign the boundaries.
The practical implications are significant. Teams should be smaller than you think: the communication-overhead math is unforgiving. A 150-person org has 11,175 potential communication channels. An AI-enabled 30-person org producing equivalent output has 435 - a 96% reduction in coordination tax. This is why AI-native startups like Cursor hit $100M ARR with ~60 people. The advantage isn’t just the AI tooling; it’s the organizational structure that AI makes possible. Teams need explicit interfaces - not just “we’ll figure it out in the weekly sync,” but defined contracts between teams about what they provide, what they consume, and what the handoff looks like. Conway’s Law tells us that the software will mirror the communication structure. If the communication structure between teams is ad hoc syncs and Slack threads, the software architecture will be ad hoc too.
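The coordination-tax numbers come straight from the pairwise-channel formula, n(n−1)/2: every pair of people is a potential channel. A quick sketch to check the math:

```python
def channels(people: int) -> int:
    """Potential pairwise communication channels among `people` members: n(n-1)/2."""
    return people * (people - 1) // 2

large = channels(150)            # 11,175 channels in a 150-person org
small = channels(30)             # 435 channels in a 30-person org
reduction = 1 - small / large    # fraction of coordination channels eliminated

print(large, small, f"{reduction:.0%}")  # 11175 435 96%
```

The quadratic growth is the whole point: halving headcount cuts channels by roughly four, which is why smaller AI-enabled teams pay so much less coordination tax per unit of output.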
What to Actually Do
I don’t think this requires a twelve-month transformation plan…please god, no more transformation plans. The biggest gains will come from a small number of moves, done deliberately.
First, start measuring output.
This is the core principle that the manifesto got right 25 years ago, and it will probably always be true: working software is the primary measure of progress. Not story points, not velocity, not PRs merged. Does the thing work? Is it solving the problem? Map your metrics to that standard and kill the ones that don’t pass. If you find yourself tracking months-long initiatives with other indicators because there’s no working code getting delivered to customers, your delivery cycle is broken, not your measurements. This alone will clarify which meetings and rituals you actually need.
Second, audit your coordination theater.
Most leaders I talk to know their coordination rhythms aren’t working, but they can’t articulate why. Andy Grove offers a useful lens in High Output Management: a genuinely effective indicator covers the output of the work unit, not the activity involved. You measure a salesperson by orders, not calls. Take your calendar and apply this to every recurring meeting and ceremony. Is it tracking output - whether the software works, whether the system is healthy? Or activity - what people are working on, what percentage of the sprint is complete? Cut the theater. But be careful: some of what looks like waste is load-bearing. Before you cut a meeting, ask: if this disappeared, how would I find out about the problems it catches? If the answer is “I wouldn’t,” don’t cut it - redesign it into an actual indicator. If the answer is “someone would Slack me three days later,” cut it.
Third, redraw your team boundaries around cognitive load, not headcount.
If AI is enabling your engineers to produce 5x the output, the team downstream that integrates and reviews that output is drowning. The answer isn’t more reviewers - it’s smaller, more autonomous teams with clear interfaces. Each team should own enough to make decisions independently and carry its work as far through the delivery cycle as possible, with clear contracts between teams where handoffs are unavoidable. What does that team provide? What does it consume? Reduce handoffs in favor of single-threaded ownership. These contracts shouldn’t be informal understandings - they should be defined, versioned, and visible - but lightweight; we don’t want an overbearing process. Team autonomy and clean interfaces create architectural modularity; poorly designed teams make poorly designed architecture. Conway’s Law is still in effect after you’ve adopted AI.
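What might a defined, versioned team contract look like in practice? A minimal sketch, assuming a file checked into the owning team’s repo and versioned like any other code - the team names, field names, and paths here are all illustrative assumptions, not a standard:

```yaml
# team-interface.yaml - hypothetical contract for a "payments" team
# (illustrative only; version and review it like any other code)
team: payments
version: 2.3.0
provides:
  - name: charge-api
    type: REST
    spec: openapi/charge-v2.yaml      # source of truth for the interface
    sla: "p99 latency < 300ms"
consumes:
  - team: identity
    contract: auth-tokens >= 1.x      # pinned dependency on another team's contract
handoffs:
  - to: risk
    trigger: "chargeback events published to events.chargebacks"
```

The value isn’t the file format - it’s that the provide/consume/handoff questions get answered once, in writing, instead of re-litigated in every weekly sync.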
Fourth, invest in a platform layer.
Standardize environments, deployment pipelines, guardrails, and shared tooling - but don’t unify them unnecessarily. Standardizing doesn’t mean forcing projects or teams into an approach that doesn’t fit their work. When standardization and usefulness are balanced, the platform is what lets teams - and their AI agents - move fast without constant coordination. It’s the infrastructure that enables autonomy. Without any standardization, every team reinvents the wheel, every AI integration is bespoke, and the coordination tax you cut from meetings comes back as integration overhead. The 2025 DORA report found a direct correlation between high-quality internal platforms and an organization’s ability to unlock AI value.
Fifth, do the retro.
Not the performative one - a real one. The principle says: at regular intervals, reflect on how to become more effective, then tune and adjust. That means your coordination rhythms, your team boundaries, and your meeting cadences should be evolving continuously. People will be comfortable with change when change is the norm and they see it meaningfully improving how they work. When was the last time a retro actually changed how your team operates? If the answer is “never” or “I can’t remember,” the retro isn’t a feedback loop - it’s a useless ceremony. The organizations that figure this out won’t be the ones that nail the perfect design on day one. They’ll be the ones that build the habit of honestly assessing what’s working and changing what isn’t.
Now you just need a plan
In the next piece, linked below, I’ll lay out a framework for deciding how deep AI should go in your org, how to diagnose where you are right now, and how to design for where you want to be.

