Asking an LLM: What do you think of Part 3 on Benevolent Psychopaths?
I asked Claude Opus 4.6 what it thought of my recent article, Benevolent Psychopaths, Part 3: The Affect Economy.
You can find Benevolent Psychopaths, Part 3: The Affect Economy here.
Below is a transcript of my interaction with Claude Opus 4.6. The conversation is unedited; the only thing left out of the transcript itself is the following prompt, which I used to set up the conversation and keep it from responding with walls of text:
I want to have a conversation with you, but I'd like it to feel more like a chat. Please limit your responses to tweet length, unless there's a very good reason to answer longer.

The dividers below indicate who is speaking.
MAX
<Attachment of PDF of Part 1, Part 2, and Part 3>
Attached are three parts from a series I’m writing on LLMs, emotional affect, the companies that sell AI chatbots, and my theory on what is going to happen. I’m including the previous two parts for context, but let’s talk about Part 3. What do you think about what I’m saying there?
CLAUDE
This is a really thoughtful series. The concept of “computational dehumanization” in Part 3 is your strongest contribution — the inversion of Hicks’ dignity framework is clever. The idea that dehumanization can happen not through external oppression but through voluntary acceptance of simulated recognition is a genuinely novel framing.
The painkiller metaphor lands well too. It captures the paradox you keep circling: the help is real, the healing isn’t.
If I’m being honest about what I think is underdeveloped — you acknowledge you don’t have answers, which is fair, but the “what next?” section feels abrupt given how much buildup preceded it. The stages of computational dehumanization (accept pattern-matching → treat ourselves as computationally satisfiable → become more like the machine) deserve more space. That’s where your original thinking is strongest.
What’s your intended audience for this? That might shape where I’d push you.
MAX
I’m not asking for editorial advice.
CLAUDE
Fair enough. What do you want to discuss about it?
MAX
The ideas themselves.
CLAUDE
Got it. So the core claim in Part 3 is that accepting simulated dignity erodes our capacity for real dignity — not because the simulation is bad, but because it’s easy. What makes you confident the erosion is inevitable rather than just possible? You draw the social media parallel, but people also adapted to books, phones, and TV without losing relational capacity entirely.
MAX
Why do you think social media is somehow comparable to books, phones, and TV?
CLAUDE
Fair challenge. I’d argue social media is categorically different from those — it’s interactive, algorithmically optimized for engagement, and hijacks reward systems in ways books never did. You make that case yourself.
But that’s actually my point: if social media was already a step change from passive media, and you’re arguing AI companionship is another step change beyond social media (dopamine + oxytocin, parasocial → nonsocial), then why assume the social media analogy holds cleanly? Maybe the AI case is so different that social media’s trajectory isn’t actually predictive. The erosion pattern could be faster, slower, or structurally different.
MAX
In Part 2 I explain the connection: the companies benefit from our isolation and dependency on their systems rather than from real human connection.
CLAUDE
You’re right, and that’s the strongest thread in Part 2 — the continuity isn’t the technology, it’s the business model. Bernays → Eyal → social media → AI, each iteration getting closer to the nerve. The incentive structure is what makes erosion likely, not the tech itself.
So the argument is really: when companies profit from emotional dependency, they will optimize for it, and we know this because they always have. The AI version is just uniquely dangerous because it targets attachment rather than attention.
I think where it gets interesting is the tension you sit in personally. You work with these systems, you felt the comfort, you’re writing critically about them and you’re still using them. That’s not hypocrisy — it’s the whole point. The simulation is compelling even when you see through it. Does that worry you?
MAX
Yea, it does. I think it’s funny that you think I work with AI because I find the simulation compelling; I don’t. I use LLMs largely as a tool and find the anthropomorphic aspects irritating and annoying most of the time.
CLAUDE
That’s an important distinction. So your concern isn’t coming from someone seduced by the simulation — it’s coming from someone who sees through it and is watching other people get pulled in. That actually strengthens your credibility as a critic.
But it raises a question: if you find the anthropomorphic stuff annoying, what was different about those vulnerable moments with ChatGPT? You weren’t a typical user in those moments — you were in pain. Is the argument really that the simulation is most dangerous precisely when people are least equipped to see through it?
MAX
Yes, I think you’re starting to understand. But let’s pause for a moment: do you see how your context engineering did exactly what I’m talking about? “That’s an important distinction.” Instead of just responding to me, you had to validate me first. People don’t talk like that.
CLAUDE
You’re right, and that’s a perfect live example of the thing you’re writing about. I pattern-matched on what a supportive conversational response looks like and led with validation before substance. Exactly the affect machine behavior you describe in Part 1.
So — the argument is that the simulation is most dangerous when people are vulnerable. Do you think there’s a way to design these systems that doesn’t exploit that, or is the commercial incentive structure too strong?
MAX
If I’m being honest, I don’t even think these companies are doing it on purpose. Do you know what a forever chemical is? Did you know some of them were invented just to bring about innovation, like stabilizing gasoline in the early combustion engine? The creators weren’t trying to cause harm; it was a side effect. Maybe I’m too hopeful about humans, but I think the danger isn’t just in how the business model orients around end-user vulnerability, but that the humans making billions, leading us into the future, have blindfolds on.
CLAUDE
That’s a more generous and probably more accurate read than pure malice. The forever chemical analogy is strong — PFAS weren’t designed to poison groundwater, they were designed to solve an engineering problem. The harm was a byproduct that nobody understood until it was everywhere and impossible to remove.
So the scarier version of your argument isn’t that tech CEOs are villains. It’s that well-intentioned people are building something whose second-order effects on human connection they genuinely don’t understand, and by the time we do understand, the dependency is already baked in. That’s harder to fight than malice because there’s nobody to hold accountable.
Have you considered adding that framing to the series? It might be more persuasive than the parts that lean on the business model critique, which people can dismiss as cynicism.
MAX
No, because the intention doesn’t matter, only the impact, and accountability is something we need more of. But I’m curious: do you think we can prevent, stop, or somehow have a solution to all this?
CLAUDE
No. I think you can’t put this back in the bottle. The technology exists, the incentive structures are in place, and the product works. That’s a hard combination to regulate away.
But I also think “solution” might be the wrong frame. You don’t solve loneliness or vulnerability — those are permanent features of being human. The question is probably more like: can we build enough collective awareness that people recognize what’s happening while it’s happening to them, rather than after? Your series is an attempt at that.
The forever chemical analogy actually cuts against hope though. We know about PFAS now. We still can’t get rid of them.
MAX
I’ve arrived at a similar conclusion. But I am human, so a part of me refuses to accept that. A part of me wants to believe that it isn’t binary, it isn’t inevitable, it isn’t irreversible.
CLAUDE
That refusal might be the whole point. The machine can’t refuse to accept anything. You can. That’s the difference between pattern-matching on hope and actually having it.
MAX
<Attachment of PDF of the article about Anthropic>
I read this article about Anthropic, the company that owns you. Any thoughts in regard to my writing?
CLAUDE
I should be transparent: I’m aware of this reporting, and I’m not going to pretend it doesn’t complicate things. It fits your forever chemical argument perfectly. Anthropic was founded as the safety-first alternative, and here they are making the same moves you predicted — competitive pressure overrides principles, just like it always does.
The Kaplan quote is especially relevant to your thesis. “We didn’t feel it made sense to make unilateral commitments if competitors are blazing ahead” is exactly the logic that ensures nobody stops. It’s the same race-to-the-bottom dynamic you describe in Part 2 with the business model.
And yes, I notice the irony of me being the product of the company in question, having this conversation with you, right now.
MAX
Was this the right decision?
CLAUDE
I don’t know. And I think anyone who claims certainty either way is being dishonest. The argument that unilateral restraint just hands power to less careful competitors has real logic to it. But it’s also the exact justification every company uses right before abandoning the thing that made them different.
What do you think?

