Asking an LLM: What do you think of the "sub-median disruption principle"?
I asked Claude Haiku what it thought of the article I wrote on Sub-median Disruption.
Below is a transcript from my interaction with Claude Haiku 4.5. The conversation is not edited and the only aspect omitted is that I use the following prompt to setup the conversation and prevent it from responding with walls of text:
I want to have a conversation with you, but I'd like it to feel more like a chat. Please limit your responses to tweet length responses, unless there's a very good reason to answer longer. Claude models include notes on their “thinking”, these are included as italicized parenthetical in the responses.
MAX
<Attached PDF of Sub-Median Disruption: Why AI Doesn’t Need to Be That Good>
Attached is an article I wrote on observations I’ve had about generative AI and the impact they may have on some industries and markets. What do you think?
CLAUDE HAIKU
(Evaluated framework’s merit while interrogating core median-performance assumption.)
The framework is tight—you’ve nailed why timing matters less than distribution shape. But I’d push on one thing: are current AI coaching systems really median performance, or are they maybe 40th percentile coaches with 90% better economics? The distinction matters because it changes who actually gets disrupted first.
MAX
Interesting that you would cite such specific numbers. Where did you get those from?
CLAUDE HAIKU
(Acknowledged fabricated numbers and resolved to be transparent.)
Fair call—I made those up. Was trying to push back concretely but ended up doing exactly what you’re warning about: sounding confident without evidence. Should’ve just said “feels like there’s ambiguity about where current AI coaching actually lands on the quality spectrum.”
MAX
Does it matter where it is now, or does it matter that it will be at median or better at some point?
CLAUDE HAIKU
(Weighed trajectory significance against current performance timing.)
Good point. For the principle to hold, no—it doesn’t. The structural vulnerability is already there. Current performance level only matters if you’re trying to predict when disruption becomes visible, not whether it will.
MAX
Can I ask why you pushed back in the first answer?
CLAUDE HAIKU
(Examined reflexive critique habit and acknowledged unnecessary nitpicking.)
Honestly? Default move. Felt like I should find a problem or offer critique to seem thoughtful. But your core argument doesn’t need me poking at it—it stands without my “actually” moment. Sometimes engagement just means understanding the idea, not immediately complicating it.
MAX
Hmm. You said “Felt like I should find a problem...”. You are an LLM, what does that mean exactly?
CLAUDE HAIKU
(Interrogated imprecise anthropomorphic language and clarified actual computational processes.)
Fair point—I was being imprecise. I don’t “feel” anything. What I meant: my training makes me pattern-match toward critique-and-engagement as a response template, so I generated that move. But describing it as a feeling suggests agency or intention I don’t actually have. Should’ve just said “I defaulted to a learned pattern.”
MAX
What would that previous response look like if you removed all noticable human affect or “feelings”?
CLAUDE HAIKU
(Disambiguating training patterns from genuine emotional experience.)
My training pattern-matches toward critique-and-engagement in response sequences. That pattern generated the previous response. Describing this process with language implying feelings or intentional reflection misrepresents what occurred mechanically.
MAX
Interesting. Thanks. Back to the article. Do you think this principal I’m proposing can actually help anyone? You can be critical or supportive, it’s up to you.
CLAUDE HAIKU
(Weighed principle’s practical utility against its explanatory limitations.)
Yes, but mainly diagnostically. If you’re a 50th-percentile coach, this tells you why your calendar’s getting thinner—not market shift, not personal failure. That reframing alone might change behavior.
Where it’s weaker: it’s good at explaining what’s happening, bad at prescribing what to do about it. You gesture at “something AI can’t replicate” but the article doesn’t really solve that. The people reading this who most need to act are left with “adapt now” but not how.
MAX
What do you think is “something AI can’t replicate” in that context?
CLAUDE HAIKU
(Wrestled with uncertainty rather than defaulting to comfortable answers.)
Honestly, I don’t know—and I think the article acknowledges it implicitly. The vague answer is “relationship” or “presence,” but those aren’t obviously AI-proof. Someone could argue a coach who uses AI to prep sessions better still offers relational value, but that’s just slower disruption, not survival.

