AI Broke the Rhythm of Work
A first-principles framework for AI-era work
A friend was visiting us this weekend. Like all folks with young kids, we were attempting to have a meaningful conversation while my 5-year-old demanded the guest watch her impromptu dance performance and my 2-year-old showed the guest her beloved framed photo of Dr. Maya Angelou. Yes, she’s 2, and you read that right. The topic of AI came up. Our guest sighed and remarked that she had been trying to use AI to write her newsletter emails, without success. She said she ended up spending more time working with AI on them than if she had just written them herself. She’s doing exactly what most people do: trying to jam AI into the way she already works, rather than rethinking how she works with AI. This is the central mistake of the AI adoption moment, and I keep seeing the same failed attempt over and over.

When we work toward anything bigger than a single task, there’s a natural rhythm: periods of focused output and periods of stepping back to coordinate, check in, integrate, and adjust course. We’ve never had a good word for that second part. It looks like overhead. It feels like the stuff around the work that “isn’t work”. But it is work; it’s the connective tissue that holds the work together. Frontier LLMs are extraordinary at accelerating the focused-output side of knowledge work - coding, writing, generating content. But they’ve done almost nothing to help with the connective tissue, and in many cases they’ve made it worse.
Imagine we are building a house. You hire a framer, an electrician, and a plumber. A natural rhythm builds: focused output through the day, a pause to coordinate what’s been done, then back to it. Each trade calibrates to the others. The rhythm isn’t just comfortable; it’s what keeps the work aligned and the quality high. A few days into the project, the electrician shows up with a machine that does their work at 200x speed and excitedly gets it going. But soon the electrician is asking for your help every few minutes. There’s a conflict with the framer, they need a sign-off on a routing decision, there’s a problem with something the plumber already committed to. By midday, you and the others have barely gotten anything else done. The machine has been working the whole time, but you’ve spent most of your time checking its work or redirecting it. You realize it’s done things you didn’t intend. You ask the electrician to stop. The electrician’s work got faster, but the rhythm broke. The project slowed down.
This is what’s happening across AI-augmented work right now, from a single person writing emails, to engineering teams shipping code, to organizations running entire functions on AI-accelerated workflows. A 2025 longitudinal study tracking AI adoption since 2023 found that teams were primarily using AI to accelerate individual tasks like coding and writing, while persistent collaboration problems were left completely unresolved. The consequences are not good: a controversial July 2025 MIT NANDA study found that only 5% of enterprise AI pilots achieved production deployment with measurable P&L impact, though critics have questioned the methodology and framing of the 95% 'failure' statistic. The electrician pattern is playing out everywhere. We introduced AI into existing rhythms and, in some cases, it dramatically accelerated the focused output layer: the velocity gains at the level of a single task are often remarkable. But there was a more profound, far less understood effect on the “not work” work around the work. Coordination that used to coalesce at natural intervals now had to happen constantly and reactively, between every burst of AI output. This is why so many people are questioning AI productivity gains: many don’t yet know how to be productive at the speed AI can work. This isn’t just an adoption issue; we need to zoom out and ask what working with AI should actually look like, from first principles.
Some Companies Have Already Figured This Out
A small number of companies didn’t bolt AI onto existing ways of working. They built around AI from day one - designing their org, their workflows, and their coordination rhythms for the speed AI makes possible. The results are incredible. According to Leonis Capital's analysis of over 10,000 AI companies, the top AI-native startups achieve $3-10M in revenue per employee, compared to ~$300K for traditional SaaS companies. Cursor is a clear example: four MIT founders, a team that grew from 12 to ~60 people by mid-2025, and a product built entirely around the idea that AI changes how coding actually works, not just how fast you can type. They hit $100M ARR faster than any SaaS company in history, then $1B ARR shortly after, with no marketing spend. The pattern holds across the category: Midjourney and Perplexity hit anomalous metrics with tiny teams because they rethought what work looks like when AI is native to the process, not bolted onto it. As one analysis put it, “You can’t retrofit your way into this mindset.” The distinction is organizational and system design, not tool adoption.
The rhythm problem shows up everywhere, but it compounds as scope grows. At the task level - one person, one output - it’s mostly recoverable. You feel the pain, you adjust, you go back to doing things the way that actually works. My friend stops using AI for her newsletter emails and writes them herself. Lesson learned after a few hours. As scope widens, the consequences of a broken rhythm become less visible and more serious. The bigger the ship, the more the coordination layer is doing navigational work: figuring out where you are, keeping course, catching when something’s gone wrong before it’s gone too far. Andy Grove understood this well when building production facilities for Intel. The coordination overhead that seems like waste at small scale is load-bearing organizational infrastructure at large scale. If you strip it out uniformly, or let AI acceleration hollow it out without noticing, the project will seem like it’s moving fast right up until it falls apart. Sidney Dekker calls this “drift into failure”: removing feedback mechanisms produces invisible organizational drift, confident movement in the wrong direction with no signal that anything is wrong. You arrive somewhere and then discover it wasn’t where you meant to go.
How I Found My Way Out
I’d been watching this pattern for the last 3 years, first in software engineering, then spreading across all kinds of knowledge work. Teams bolt AI onto their existing rhythm, velocity appears to spike, and something quietly starts to break. I went through this exact experience myself. For months I toyed with Cursor, Claude, and other AI coding tools. I would open one up, give it a basic prompt, and expectantly wait for it to code faster and better than me. I would then spend roughly as long reviewing what it did as it would have taken me to write the same code myself. If it wasn’t perfect I would get frustrated and throw up my hands… “AI can’t code!”. Eventually I realized I was the problem, and what unstuck me wasn’t some new AI-specific insight, but an old one from Andy Grove, written in 1983. Grove’s High Output Management makes a distinction I’d always applied to my own leadership: the job isn’t to attend every meeting, it’s to read the indicators that tell you whether the system is healthy - don’t manage activity, manage output and signals. I used this method as an engineering leader. You don’t review every line of code; you design the system so you can tell when something’s gone wrong. You trust the architecture and instrument for drift, not for visibility into every decision.
"The mistake was that I was managing AI at the wrong level: approving every change, every edit, and every output. That fragments your attention, breaks the rhythm, and drowns you in coordination overhead exactly when AI should be freeing you from it."
When I brought that thinking to AI, everything changed. The mistake was that I was managing AI at the wrong level: approving every change, every edit, and every output. That fragments your attention, breaks the rhythm, and drowns you in coordination overhead exactly when AI should be freeing you from it. I’ve written about this distinction: real operators manage systems, Executivists manage activity. AI makes that divide starker than it’s ever been. If you’re managing AI the way an Executivist manages a team - approving every step, performing oversight without designing for it - you get all the overhead and none of the leverage. The shift is to stop managing what AI produces at the task level and start designing the workflow and system conditions under which AI can run. That’s the first-principles move: not “how do I use AI better” but “what does work at this layer actually need, and where does AI fit in that?”
First Principles: What Each Layer Actually Needs
The pre-existing rhythms of work accumulated and coalesced over decades; there’s no reason to keep them by default. The old ways contain as much theater as function, and they were never built for this speed. The question is what each layer actually needs, starting from scratch. There are many ways to think about the scope of work; these four layers are my simple method.
Task.
What this layer needs is direct feedback against clear intent, and the coordination overhead here really is mostly waste. AI is a genuine gift at this level, if you rethink the task rather than just automating the old version of it. My friend writing her newsletter doesn’t need AI to replicate her existing writing process. She needs to ask what it looks like to produce newsletters at scale, and figure out where AI fits in that from the ground up. The gains at this layer are real, but only if you’re automating something worth automating. Don’t ask the LLM to complete a task and then expect it to read your mind and already have your expertise - tell it what success looks like and let it figure out how to get there.
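To make that concrete, here’s a minimal sketch of the difference. The call_llm() helper is hypothetical, standing in for whatever model client you actually use, and the success criteria are illustrative, not a prescribed format.

```python
# Hypothetical helper standing in for whatever LLM client you use.
def call_llm(prompt: str) -> str:
    ...

# The common failure mode: a bare task, with success left to mind-reading.
draft = call_llm("Write my newsletter email about this week's launch.")

# The first-principles version: state what success looks like explicitly,
# then let the model figure out how to get there.
SUCCESS_CRITERIA = """
- Audience: existing subscribers who already know the product
- Voice: first person, conversational, no marketing superlatives
- Length: under 400 words, with one clear call to action
- Must include: the launch date and the pricing change
"""

draft = call_llm(
    "Write this week's newsletter email about the launch.\n"
    f"Success looks like:\n{SUCCESS_CRITERIA}"
)
```

The prompt carries the definition of done. That’s the whole trick at this layer: the model gets your intent, not just your task.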
Workflow.
What this layer needs is orientation: where are we relative to where we meant to be? That calls for deliberate checkpoints with real forcing functions - not the old status meeting with its inherited theater, but something designed for this speed and this scale: a checkpoint that answers a real question rather than one that creates the feeling of alignment while everyone privately remains confused. My friend might design a newsletter workflow with quality monitoring and auditing built in, so she can let an LLM do the basic writing while ensuring it stays on voice and on topic, without having to read and rewrite every issue herself.
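Here’s a minimal sketch of such a checkpoint. The voice and topic checks are placeholders; real ones might score drafts against embeddings of past issues or use a second model as a reviewer. The forcing function is that a human only sees drafts that fail the gate.

```python
def check_voice(draft: str) -> bool:
    # Placeholder heuristic. A real check might compare the draft against
    # embeddings of past issues, or ask a second model to act as reviewer.
    return "synergy" not in draft.lower()

def check_topic(draft: str, topic: str) -> bool:
    # Placeholder. A real check would be semantic, not a substring match.
    return topic.lower() in draft.lower()

def checkpoint(draft: str, topic: str) -> None:
    reasons = []
    if not check_voice(draft):
        reasons.append("off voice")
    if not check_topic(draft, topic):
        reasons.append("off topic")
    if not reasons:
        print("queued for send")                # no human on the happy path
    else:
        print("escalated to author:", reasons)  # human attention only on failure

checkpoint("Our launch lands Tuesday, and here's why it matters...", topic="launch")
```

The checkpoint answers a real question - is this on voice and on topic? - instead of routing every draft through her for approval.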
System / Project.
At this layer the system is too complex for any one person to fully know, and AI acceleration widens the gap between what’s been produced and what’s understood. The question shifts from “where are we?” to “is something going wrong that we don’t know about yet?” What’s needed is detection, the earlier the better. This is instrumentation: what are the signals that tell you the system is healthy, and what are the first signs of drift that you need to see before they compound? For the newsletter project, this could be engagement monitoring and anomaly detection in reader behavior. A drop or spike would surface after only a few issues, flagging a problem early, while it’s still cheap to correct.
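A minimal sketch of that kind of instrumentation: compare each new issue’s engagement against a rolling baseline and flag sharp deviations. The 8-issue window and 3-sigma threshold here are illustrative assumptions, not recommendations.

```python
from statistics import mean, stdev

def detect_drift(history: list[float], latest: float,
                 window: int = 8, sigmas: float = 3.0) -> bool:
    """Flag `latest` if it deviates sharply from the recent baseline."""
    baseline = history[-window:]
    if len(baseline) < 3:
        return False                          # not enough signal yet
    mu, sd = mean(baseline), stdev(baseline)
    if sd == 0:
        return latest != mu                   # flat baseline: any change is drift
    return abs(latest - mu) > sigmas * sd

# Open rates for recent issues, then the newest one.
open_rates = [0.42, 0.44, 0.41, 0.43, 0.45, 0.40, 0.43, 0.42]
print(detect_drift(open_rates, 0.28))         # True: investigate before it compounds
```

The point isn’t the statistics; it’s that drift gets surfaced automatically, before a dozen more issues go out on the wrong trajectory.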
Organization.
At this scale, decisions in one AI-accelerated stream propagate into others before anyone has tracked the dependency, which means detection alone may be too slow. New decisions are being made, changed, or unmade while the work from the original decisions is still wrapping up. What’s needed is architecture: reversibility by default, explicit interfaces between teams, scoped decision authority. The goal isn’t to prevent failure, it’s to contain how far failure travels when it arrives, because at this scope, it will arrive. This isn’t a call for overbearing process and “oversight” committees, either. Instead of broad initiatives with brittle tendrils cutting across your organization, compartmentalize work and changes into the smallest units, and have teams work in rapid iterations. Consider keeping the output work and systems as separate as possible, within bounded contexts. My friend could spin up her newsletter system and be tempted to integrate it into her social media or podcasting content systems. That is a mistake: AI’s effectiveness falters and the velocity gains are lost to generalization when you try to make it act across an entire organization. Take the methods of the newsletter system and build a separate system for podcasts or social media, each containing its own workflows. Then implement a control plane, of sorts, where you can share the underlying data, context attributes, and other common elements without deeply integrating the two systems. Design it for success at scale and velocity.
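Here’s a minimal sketch of that separation, with invented names throughout: two bounded systems that never call each other, but both read shared context attributes through a thin control plane.

```python
class ControlPlane:
    """Thin shared layer: common facts both systems may read.
    Neither system may reach into the other's internals."""

    def __init__(self) -> None:
        self._context: dict[str, object] = {}

    def publish(self, key: str, value: object) -> None:
        self._context[key] = value

    def get(self, key: str, default: object = None) -> object:
        return self._context.get(key, default)

class NewsletterSystem:
    """Bounded context: all newsletter workflows live entirely in here."""
    def __init__(self, plane: ControlPlane) -> None:
        self.plane = plane

    def run(self) -> None:
        voice = self.plane.get("brand_voice")
        print(f"newsletter drafting in voice: {voice}")

class PodcastSystem:
    """A separate bounded context with its own workflows, sharing only the plane."""
    def __init__(self, plane: ControlPlane) -> None:
        self.plane = plane

    def run(self) -> None:
        voice = self.plane.get("brand_voice")
        print(f"podcast scripting in voice: {voice}")

plane = ControlPlane()
plane.publish("brand_voice", "warm, direct, first person")
NewsletterSystem(plane).run()
PodcastSystem(plane).run()
```

If the podcast system breaks or drifts, the newsletter system doesn’t notice. Failure is contained at the interface, which is the whole point of the architecture at this layer.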
Each layer needs its coordination rebuilt from first principles, designed for the actual failure modes of that layer and for the speed AI makes possible. What this looks like in practice is explicit scope, defined signals, and intentional interfaces between layers - a rhythm of output and coordination that is designed, not inherited from a way of working that predates the tools by decades. The AI-native companies understood this instinctively. They didn’t ask how to make their old process faster. They asked what process makes sense now.
"The right response is not to approach AI cautiously, testing the new speed against old ways of working and retreating whenever something breaks. The right response is to go back to first principles, adopt the speed, see what actually fails, and build the infrastructure to detect and contain failure when it arrives."
Don’t Try to Slow AI Down, Learn to Run Faster
The right response to all of this is not to approach AI cautiously, testing the new speed against old ways of working and retreating whenever something breaks. That approach gives you the anxiety of change without the gains, and it leaves you managing the seams between an AI-accelerated output layer and a coordination layer that was never designed for it. The right response is to go back to first principles, adopt the speed, see what actually fails, and build the infrastructure to detect and contain failure when it arrives. The organizations that thrive in the next few years won’t be the ones who naively ran off a cliff, and they won’t be the ones who cautiously held onto the process that was strangling them. They’ll be the ones who asked the harder question - what rhythm of work and coordination does each layer of what we do actually need? - and then built that deliberately, for the speed they want to operate at.
If you’re an individual contributor: the gains from AI are real, but they only show up if you rethink the task rather than just accelerating it. The friend who struggles with AI for her newsletter isn’t failing at AI adoption; she’s doing exactly what most people do, which is asking AI to replicate a process rather than asking what the process should look like. The reframe is small, but the difference in outcome is large. Think deeply about what you’re actually trying to do and what your ideal outcome is, then figure out how that works when AI is doing some or all of it.
If you’re an operator or team lead: the thing to watch for is coordination theater, the inherited rhythms of check-ins and status updates and approvals that feel like oversight but don’t actually answer a real question about where things are. At AI speed, the cost of that theater isn’t just wasted time; it’s that you get all the friction of coordination and none of the actual signal. Design checkpoints that answer real questions, at the cadence this speed actually requires. Thoughtfully design for faster movement, and be okay erring on the side of less activity monitoring, not more.
If you’re a leader: the shift is from activity management to systems design, and AI makes the gap between these two approaches wider than it has ever been. If you’re approving outputs rather than designing the conditions under which good outputs emerge, you are the bottleneck and AI has just made you an even more expensive liability to your teams. Build the instrumentation that tells you what you can’t see from the top, and build the architecture that contains failure before it propagates. Be nimble, and adapt your designs when needed.
AI will break the rhythm. We need to know how to investigate why, and to build it back better.
Notes & Deeper Dives
AI Hasn’t Fixed Teamwork A 2023–2025 longitudinal study found that while AI dramatically accelerated individual output tasks like coding and writing, persistent collaboration problems went almost entirely unresolved. Teams adopted AI at the task layer but made no structural changes to how they coordinated, leaving the rhythm problem intact. Read the study
Cursor by Anysphere The clearest case study of AI-native organizational design in practice. Contrary Research has a detailed breakdown of Cursor’s founding story, team structure, and ARR trajectory. The Spearhead piece covers the speed of their growth specifically. Contrary Research · Spearhead
Andy Grove, High Output Management (1983) Grove’s framework for managing systems rather than activities is the foundation of the “How I Found My Way Out” section. His distinction between managing indicators versus managing actions is as applicable to AI as it was to Intel’s manufacturing floors. The book is short, dense, and worth reading in full. Amazon
Sidney Dekker, Drift into Failure (2011) Dekker’s research on how complex systems fail without visible warning signs is the conceptual backbone for the scale problem section. His central insight — that systems drift toward failure gradually and invisibly when feedback mechanisms erode — maps almost perfectly onto what happens when AI acceleration hollows out coordination without anyone deciding to do so. Amazon
Tom DeMarco, Slack (2001) DeMarco’s argument that efficiency-obsessed organizations strip out the adaptive capacity they actually need — the time and space to respond to the unexpected — is a useful counterpoint to the dominant “move faster” framing. Coordination overhead that looks like waste often isn’t. Amazon
On Leadership in the AI Era My earlier piece on the distinction between real operators and Executivists — people who manage systems versus people who manage activity — goes deeper on why AI makes this divide more consequential than it’s ever been. humanoftheloop.com

