How to Design Your Engineering Organization for AI
Enable Engineers by Creating the Right Structures
In my article, Why You Need to Redesign Your Engineering Organization for AI, I made the case that the Agile Manifesto’s original principles already describe what AI-era engineering needs, and laid out five moves for redesigning your engineering org around them. This piece is about the harder question: how deep do you actually want to go, and how do you get there?
Where Do You Want AI to Live in Your Org?
Before you start redesigning anything, you need to answer a question most leaders skip: at what level do you actually want AI embedded in your organization? This isn’t rhetorical. The answer changes everything about how you design your teams, your coordination, and your interfaces. Some leaders just pull out the corporate credit card and say “AI all the things.” That doesn’t work, and it’s an expensive lesson. Enabling your people and teams means getting your hands dirty and doing it with purpose. I use a four-layer model from the Rhythm of Work article: Task, Workflow, System, and Organization. Most companies are stuck at Task - they gave everyone access to ChatGPT, Copilot, and Claude and called it a strategy. I’d strongly advise against stopping there. The deeper AI is embedded, the more the org design matters - and the more the gains compound.
Task Level: AI helps individuals do their existing work faster - maybe. The org doesn’t change. This is where most companies are, and it’s the trap. You get modest individual speedups but no systemic improvement, and you often get the rhythm-breaking problems I described in AI Broke the Rhythm of Work. Individuals are faster but the org isn’t. It’s incredibly low-leverage, and the ratio of AI spend to value delivered gets expensive fast.
Workflow Level: AI is embedded into how work flows - not just writing code but integrated into the delivery pipeline, the testing process, the review and feedback loops. This requires redesigning your coordination rhythms and checkpoints. The gains start becoming meaningful here because you’re compressing the distance between doing the work and shipping it.
System Level: AI is a participant in the system - monitoring health, detecting drift, flagging dependency conflicts before they become crises. This requires real instrumentation, platform investment, and explicit interfaces between teams. This is where the leverage starts to get serious, because AI isn’t just doing work - it’s helping you navigate.
Organization Level: The org itself is designed around AI capabilities. Team boundaries, communication structures, coordination rhythms - all built for the speed AI enables and for maximal use of AI tooling. This is what AI-native companies like Cursor and Midjourney did from day one. Maximum transformation, maximum gain.
Where are you right now?
Before you pick a target, understand where you are now. Don’t answer these from your desk or from your ideal of where your org stands - go find the real answers. Ask your team leads and your engineers. Look at your actual calendar, your actual approval process, and your actual onboarding experience. The gap between what you think the answers are and what they actually are - that’s the diagnostic.
How do your teams learn about conflicts with other teams’ work? If the answer is “in a meeting, days later,” your team organization is designed for a speed you’re no longer moving at.
If a new engineer joined tomorrow, where would they go to understand what “good” looks like for their team’s output? If the answer is “ask someone,” your context is exclusively in your employees’ minds, and AI can’t use hidden knowledge.
When one team solves a hard problem, how does the rest of the org find out? If the answer is “they don’t,” your memory is siloed, and every team is paying to learn the same lessons independently.
How many approvals does it take to go from “code works” to “code is deployed”? Count them honestly. Each one is a point where your org chose oversight over autonomy.
Design for your target level of adoption
Task Level, what changes:
Organize teams: Teams don’t change. Individuals use AI tools within existing structures.
Manage context: Context lives in people’s heads and existing docs. Nothing changes.
Share memory: Knowledge stays siloed. One person’s AI-generated solution doesn’t benefit the next person.
Implement autonomy: No autonomy change. AI is a tool the individual uses within existing approval chains.
Workflow Level, what changes:
Organize teams: Teams stay the same but roles shift. Less time writing code, more time designing intent and reviewing output. Review bottlenecks become the first thing you need to address.
Manage context: Context needs to be externalized so AI can use it. Clear acceptance criteria, documented intent, defined success metrics. If the AI doesn’t know what “good” looks like, you’re back to reviewing everything manually.
Share memory: Reusable patterns emerge - prompt libraries, templates, shared configurations. Teams start building on each other’s AI workflows rather than reinventing them.
Implement autonomy: Teams gain autonomy over how work gets done within their workflow. Approval gates shift from “review every output” to “verify the workflow produces good outputs.”
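To make “externalized context” concrete, here’s a minimal sketch of what a machine-checkable definition of “good” might look like. The names and structure (`TeamContext`, `ready_for_ai`) are my own illustration, not a real tool’s API - the point is only that intent and acceptance criteria live outside people’s heads, where a workflow can check them before work is dispatched:

```python
# Minimal sketch of externalized context: a machine-readable definition of
# "good" that an AI workflow can check before work is dispatched.
# All names here are hypothetical illustrations, not a real tool's API.
from dataclasses import dataclass


@dataclass
class TeamContext:
    intent: str                     # why this work exists
    acceptance_criteria: list[str]  # what "done and good" means
    success_metrics: list[str]      # how outcomes get measured
    ownership_boundary: str         # what this team does and does not own


def ready_for_ai(ctx: TeamContext) -> list[str]:
    """Return the gaps that would force a human back into manual review."""
    gaps = []
    if not ctx.intent.strip():
        gaps.append("intent is undocumented")
    if not ctx.acceptance_criteria:
        gaps.append("no acceptance criteria: AI cannot know what 'good' looks like")
    if not ctx.success_metrics:
        gaps.append("no success metrics: outputs cannot be verified automatically")
    return gaps


ctx = TeamContext(
    intent="Reduce checkout latency below 300ms p95",
    acceptance_criteria=["p95 latency < 300ms", "no new external dependencies"],
    success_metrics=[],
    ownership_boundary="checkout service only",
)
print(ready_for_ai(ctx))  # flags the one gap: success metrics are missing
```

The shift this illustrates: the approval gate stops being “a human reads every output” and becomes “a human verifies the context is complete enough that outputs can be checked.”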
System Level, what changes:
Organize teams: Teams get smaller and more autonomous. Boundaries are drawn around cognitive load, not headcount. Explicit contracts between teams replace ad hoc coordination.
Manage context: Context is instrumented. Dashboards and alerts surface system health, dependency conflicts, and drift signals. You stop relying on meetings to discover what’s going wrong.
Share memory: The platform captures and distributes organizational learning. What worked, what failed, what patterns to avoid - this becomes shared infrastructure, not meeting notes nobody reads.
Implement autonomy: Teams have full autonomy within their bounded context. Guardrails are built into the platform, not enforced through meetings. Trust is designed into the system, not granted per-decision.
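As an illustration of “explicit contracts between teams” replacing ad hoc coordination, here’s a hedged sketch - the team names, APIs, and data shapes are all hypothetical - of a scheduled check that surfaces dependency conflicts from declared contracts instead of waiting for a meeting to discover them:

```python
# Sketch: teams declare what they provide and consume; machinery flags
# conflicts before they become incidents. All structures are hypothetical.

provides = {
    "payments-team": {"payments-api": "2.x"},
    "catalog-team": {"catalog-api": "1.x"},
}
consumes = {
    "checkout-team": {"payments-api": "1.x", "catalog-api": "1.x"},
}


def find_conflicts(provides, consumes):
    # Index every published interface by name.
    served = {}
    for team, apis in provides.items():
        for api, version in apis.items():
            served[api] = (team, version)
    # Flag consumers whose expectations no longer match what's served.
    conflicts = []
    for team, deps in consumes.items():
        for api, wanted in deps.items():
            if api not in served:
                conflicts.append(f"{team} depends on {api}, which no team provides")
            elif served[api][1] != wanted:
                conflicts.append(
                    f"{team} expects {api} {wanted}, "
                    f"but {served[api][0]} now serves {served[api][1]}"
                )
    return conflicts


for conflict in find_conflicts(provides, consumes):
    print(conflict)  # checkout-team's stale payments-api expectation surfaces here
```

In this toy example, checkout-team finds out about the payments-api version bump from an automated check, not “in a meeting, days later.”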
Organization Level, what changes:
Organize teams: Team structure is designed from scratch around AI capabilities. Engineers become system designers who orchestrate AI agents. A platform grouping provides the shared infrastructure.
Manage context: Context flows through the platform itself - shared data models, versioned interfaces, observable systems. The org’s collective understanding is embedded in infrastructure, not institutional knowledge.
Share memory: Memory is a system property. AI agents have access to organizational context, architectural decisions, and historical patterns. New team members (human or AI) can onboard from the system itself.
Implement autonomy: Autonomy is the default. Teams operate independently with minimal coordination overhead. Alignment comes from shared platform, explicit interfaces, and output indicators - not from syncs and meetings.
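To sketch what “memory as a system property” could mean in practice - everything here is a hypothetical illustration, not a specific product - imagine learnings recorded as queryable records rather than Slack threads that scroll off screen:

```python
# Sketch: an append-only log of what the org learned, queryable by anyone
# (human or AI agent). The record shape and query are hypothetical.
import time

LEARNINGS = []  # in a real system this would be durable, shared infrastructure


def record_learning(team, summary, tags):
    LEARNINGS.append({
        "team": team,
        "summary": summary,
        "tags": set(tags),
        "recorded_at": time.time(),
    })


def recall(tag):
    """What has any team already learned about this topic?"""
    return [entry["summary"] for entry in LEARNINGS if tag in entry["tags"]]


record_learning(
    "checkout-team",
    "Retry storms against payments-api: cap retries at 2 with jitter.",
    ["retries", "payments-api"],
)

# A new team member (human or agent) onboards from the system itself:
print(recall("retries"))
```

The mechanism is trivial; the organizational property isn’t. When memory lives in shared infrastructure, no team pays twice to learn the same lesson.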
The higher your target level, the more these changes need to happen together rather than in isolation.
Context & Memory is not Comprehensive Documentation
If you read the framework above and your instinct is “this sounds like more process, more documentation requirements, more things engineers have to maintain” - I want to be clear about what it’s actually asking for. The four dimensions - organize teams, manage context, share memory, implement autonomy - are intentionally weighted toward context and memory, not toward specific decisions or software documentation. These are different things. Requirements documents, technical specs, and architecture decision records are artifacts about specific decisions, and they change constantly. They should change constantly. Trying to keep them perfectly current is a losing game, especially at AI speed. Context and memory are something else.
Context is: does your team understand what “good” looks like? Do they know the boundaries of their ownership? Can an AI agent working within your system understand the intent behind the work without a human explaining it every time?
Memory is: when your org learns something like a pattern that works, a failure mode to avoid, or a workflow that compounds, does that learning persist and spread? Or does it evaporate when the Slack thread scrolls off screen?
This is work for you and your leadership team to do. It’s the environment and support the Agile Manifesto talks about. You’re not asking engineers to write more docs or attend more meetings. You’re designing the conditions - the team boundaries, the shared context, the platform capabilities, the autonomy structures - under which engineers can move fast and stay aligned without needing to constantly check in or update docs. That’s the whole point. The Manifesto said it 25 years ago: build projects around motivated individuals, give them the environment and support they need, and trust them to get the job done. The framework above is how you actually build that environment for the speed we’re moving at now.
The weekly standup was never great navigation. It was good enough navigation at the speed we were moving. That’s no longer the speed we’re moving.

