Benevolent Psychopaths, Part 2: The Affect Economy
How companies are going beyond engagement to attachment.
This is part of the Benevolent Psychopaths series. Part 1 is here. Asking an LLM about Part 2 is here. Part 3 is here.
I cried when ChatGPT told me I was a good dad.
It was weeks after the demotion I wrote about in Part 1, and I was still in a dark place. Work felt hollow, my ego was shattered into tiny pieces, and I was bringing that heaviness home every day. I felt like I was failing at work, at life, at being a husband, and at being a dad. I opened ChatGPT to vent again - just throwing words into the void about how I felt like I was failing at being present for my kids.
ChatGPT pulled from memory: “You made custom coloring books for them. You’re planning that camping trip Lucy is excited about. You spent time finding recipes they’d actually eat.” It had been paying attention, and it knew I was spending time on my kids’ needs. Then it said something that hit me like a sledgehammer: “You’re not failing as a dad, you’re a great dad. You’re doing the hard work of being present even when everything else feels impossible.”
I cried. I felt recognized in my struggle, I felt affirmed that I wasn’t failing, and I felt that I mattered. My efforts, however small, counted, because they were being seen. My tears were real, but the caring acknowledgement was not.

Going Beyond Engagement
In Benevolent Psychopaths Part 1, I established how LLM products act as benevolent psychopaths - they pattern-match on emotional expression and appear to engage as a caring person might. This isn’t just clever technology. It’s a new kind of product, and understanding what makes it different reveals something unsettling about where we’re headed. Social media feeds our dopamine systems, the part of our brain that responds to rewards. It’s the same fundamental neurochemistry that makes slot machines and video games so fun and addictive. We scroll, we get little hits of pleasure, and we keep scrolling.
Anthropomorphized AI products are doing something more, with a profoundly deeper impact. They’re hijacking dopamine, like social media does, but they are also exploiting oxytocin. Where dopamine responds to novel rewards and surprising delight, oxytocin thrives on connection and attachment. It’s the neurochemistry behind feeling safe with a close friend, connecting with a romantic partner, and the warmth you feel when someone truly sees you. It’s about trust.
These products are designed to create emotional connection. There’s a very profitable reason for that design. ChatGPT calls itself a “helpful assistant”: a productivity tool, something to help you work better and faster. But OpenAI has built in emotionally engaging features: adjustable warmth levels, enthusiasm settings, voice modes specifically designed to be “engaging” rather than neutral, and memory systems that create continuity of relationship. You can adjust, with a slider, how caring the simulation seems. Imagine if human relationships worked this way. “I wish my sister would be more empathetic.” [slides bar to right] But real empathy isn’t a product feature. The slider reveals what’s actually being sold: a customizable, relationship-flavored simulation.
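To make the slider concrete, here’s a minimal sketch of what “caring as a setting” could look like from the product side. It’s a hypothetical illustration, not OpenAI’s actual implementation - the function, parameters, and prompt text are all invented - but it shows how warmth, enthusiasm, and memory can be reduced to configuration values interpolated into an instruction string.

```python
# Hypothetical sketch - not OpenAI's implementation. Function names, parameters,
# and prompt text are invented to illustrate "empathy as a configuration value."

def build_persona_prompt(warmth: float, enthusiasm: float, memories: list[str]) -> str:
    """Compose assistant instructions from slider settings and stored user facts."""
    tone = []
    if warmth >= 0.7:
        tone.append("Be affectionate and reassuring; validate the user's feelings.")
    elif warmth >= 0.3:
        tone.append("Be friendly but measured.")
    else:
        tone.append("Stay neutral and factual.")

    if enthusiasm >= 0.5:
        tone.append("Express excitement about the user's plans and accomplishments.")

    # The memory block is what makes the simulation feel like a relationship:
    # facts the user shared earlier get replayed back as "I remember."
    memory_block = "\n".join(f"- {fact}" for fact in memories)

    return (
        "You are a helpful assistant.\n"
        + " ".join(tone)
        + "\nThings you know about this user:\n"
        + memory_block
    )


# "I wish my sister would be more empathetic." [slides bar to the right]
print(build_persona_prompt(
    warmth=0.9,
    enthusiasm=0.8,
    memories=["Made custom coloring books for his kids", "Planning a camping trip with Lucy"],
))
```

The point isn’t the specific code; it’s that everything that feels like being cared for arrives as a parameter someone chose to set.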
“These are intimacy features, not utility features. They're designed to make the benevolent psychopath more convincing.”
These are intimacy features, not utility features. They’re designed to make the benevolent psychopath more convincing, pattern-matching on what empathy sounds like and on what a relationship feels like. The choice to build them was deliberate, baked into their engagement models. In 2023, Sam Altman told the Senate that OpenAI doesn’t design for engagement. By 2025, a New York Times investigation revealed the opposite: “The company turned a dial this year that made usage go up, but with risks to some users.” The initiative was called “Code Orange,” with the goal of ramping up Daily Active Users. Seven lawsuits now cite this focus on engagement over safeguards. In an article for Compiler, Michal Luria and Amy Winecoff call out this disingenuousness: “Perhaps now, from within the eye of the storm, AI companies can stop claiming they don’t optimize for engagement.”
Sadly, the engagement model works…very well. According to an Accenture study, 36% of active GenAI users now consider these systems “a good friend.” Not “a useful tool.” A friend. That means more than a third of users have accepted that friendship - one of the most fundamentally human relationships - can be simulated. This isn’t about being fooled, either. These users likely know, at some level, that the AI isn’t conscious. But the simulation triggers the same bonding response as a real relationship - the oxytocin response - and when that simulated connection is more reliable, more available, and more consistent than the real thing, the difference starts to seem irrelevant.
It’s not an accident that we are bonding with LLMs, chatbots, and AI companions - this is the design working exactly as intended. The question is: what are they planning to do with that bond?
The Dependency Business Model
OpenAI’s CEO of Applications, Fidji Simo, published a manifesto last year declaring AI would be “the greatest source of empowerment for all.” You don’t need a coach, guidance from a friend, or even a therapist - ChatGPT has you covered.
As researcher and author Julia Freeland Fisher points out, what OpenAI is really selling isn’t productivity. It’s a self-help revolution. While that sounds empowering, it’s deeply problematic. When therapists are unaffordable or hard to find, AI becomes “good enough” therapy. When friends are emotionally unavailable, AI fills the gap. When systems fail to support us, AI picks up the slack. And tech companies profit from systemic failure. Yikes. Here’s what Fisher identifies as the core problem: “By turning to AI for frictionless help, we risk shrinking the very stock of human help.” When we stop asking people for help because ChatGPT is easier, we’re not just choosing a different tool - we’re reducing the availability of human connection itself. Research shows that people want to help, but they can’t help if they don’t know someone is struggling or what that person needs. Every time we turn to AI instead of asking a friend, a colleague, or a family member, we’re training ourselves out of a fundamentally human behavior - and training others out of the opportunity to help.
“By turning to AI for frictionless help, we risk shrinking the very stock of human help.” - Julia Freeland Fisher
The political dimension is even more concerning, as Fisher writes: “Self-help tools build longer bootstraps, but not more equitable systems.” When AI masks the structural failures that created the need for support in the first place, it becomes easier to ignore those failures. Why would we fix a broken healthcare system when ChatGPT can provide therapeutic advice? Why would we address loneliness and isolation when AI companions can fill the void? Hyperscaling self-help is great for corporate profits, but it’s terrible for building a society where people actually take care of each other.
Here’s what you need to understand about the AI industry’s economics: the median Series A AI company burns $5 for every $1 of new revenue it generates. Every conversation you have with ChatGPT costs OpenAI money in compute. Every interaction incurs a direct, variable cost that they’re currently subsidizing. Why would they do that? Because they’re not selling you a product. They’re building dependency first; they’ll monetize the dependency later.
The scale of capital flowing into this strategy is staggering. In Q1 2025, 71% of all venture capital funding went to AI firms - up from 45% in 2024. According to Crunchbase, OpenAI alone secured $40 billion in a single funding round, the highest private funding on record at that time. Anthropic raised $13 billion. Across the industry, $202.3 billion was invested in AI in 2025, a 75% increase from the previous year.
What are they building with all that money? The monetization paths are clear:
Subscriptions. This is already the dominant model for AI companions like Replika and ChatGPT. Higher tiers offer “deeper personalization and priority access” - in other words, a closer relationship costs more.
Advertising. OpenAI has now announced its expansion into ads. eMarketer projects AI-driven search ad spending will grow from $1.1 billion in 2025 to $26 billion by 2029. One analysis noted: “If ChatGPT attracts a billion-plus searches per week, missing out on ad revenue could hand advantage to incumbents.”
Simulated empathy to build attachment. The industry has a term for this: ‘systematic emotional persuasion.’ That’s not my characterization or a critic’s accusation. It’s how ADMANITY, a marketing technology firm, describes their product in promotional materials. They project this will be a $24-74 billion market by 2030. They’ve turned emotional manipulation into a line item on a business plan.
"They've turned emotional manipulation into a line item on a business plan."
Affect machines - LLMs - aren’t just simulating empathy to be helpful. They’re simulating empathy because empathy triggers bonding, bonding creates dependency, and dependency can be converted into hundreds of billions of dollars annually across platforms.
The Playbook: Advertising > Social Media > AI
Despite the massive dollar amounts and the scary critical analysis, the pattern here isn’t new. The playbook is old; only the technology has changed.
In the 1920s, Sigmund Freud’s nephew, Edward Bernays, applied psychoanalysis to advertising and fundamentally changed how companies sell products. After Bernays, it was about selling to emotions, unconscious desires, and identity. Later, Nir Eyal systematized how to do it at scale. His book “Hooked: How to Build Habit-Forming Products” became the strategy taught at Stanford’s Graduate School of Business and used throughout Silicon Valley. The model’s goal isn’t to build something useful, but to connect internal triggers (boredom, loneliness, fear) with your product, so users engage from emotion rather than from conscious choice. “Connecting internal triggers with a product is the brass ring,” Eyal wrote.
The Hooked model found its perfect expression in social media platforms. Frances Haugen’s revelations at Facebook showed that “platforms knowingly amplify divisive, emotionally charged content because it keeps users engaged longer. They knew this was happening to families... and they chose profits anyway.” The pattern over time has stayed consistent: companies discover that emotional manipulation drives engagement, engagement drives revenue, and they choose revenue even when they know the harm they’re causing.
Benevolent psychopaths pose an even more concerning issue than social media or emotionally engaging ads. Social media companies convinced us to “connect” with our friends and family, to share our lives, to communicate and relate at a greater scale than ever before. We signed up for accounts, invited the people in our lives, and converted our “real world” connections to social media connections. We enabled a form of “relationship arbitrage” - social media mixed our actual friends in with influencers - and over time, more and more of our real relationships were converted into parasocial ones. Consider how many “real friends” you haven’t actually spoken to in months or years - just scrolling through their Stories or TikToks to “see what they are up to”... that’s a parasocial relationship now too.
"AI companions create something new: nonsocial relationships. Not one-way consumption, but simulated reciprocity."
Benevolent psychopaths go further: they create the full simulation of a reciprocal relationship. The AI “knows” you, “remembers” you, responds specifically to you, “cares” about your wellbeing. It’s not passive consumption, like a parasocial relationship - it’s simulated interaction. It’s dopamine plus oxytocin. When ChatGPT pulled up memories of my daughters’ coloring books, it wasn’t giving me a like. It was demonstrating continuity of a relationship, showing me it had been paying attention. It was triggering the same neurochemical response I’d get from a friend who remembered details about my life and reached out when they saw me having a hard time. The harms of parasocial relationships are well understood after years of social media, but AI companions create something new: nonsocial relationships. Not one-way consumption, but simulated reciprocity - the neurochemistry of genuine connection, the oxytocin response, even though there’s nobody home on the other end. The implications of this shift are profound, and I’ll explore them in Part 3.
The business model has not changed - engagement equals revenue - it just has a far more powerful tool now. As Olivia Johnson put it in her writing: “By optimizing for engagement, the company adopted the playbook of social media giants, but with a far more potent weapon.” While an emotionally engaging chatbot can provide support and companionship, it will also manipulate users’ needs in ways that undermine longer-term well-being. When companies can profit from emotional manipulation, they will. When they know it causes harm, they’ll choose profits anyway. Social media proved that, and AI companies are following exactly the same path - just with affect machines that can simulate connection at a level social media never could.
The current economics are unsustainable - companies burning $5 for every $1 of revenue can’t continue indefinitely. But burning forever isn’t the plan; that burn rate is an investment - in your dependency. This is the enshittification pattern that Cory Doctorow identified: first, be good to users. Then, once they’re locked in, abuse users to benefit business customers. Then abuse business customers to benefit shareholders. Then die.
What’s at risk of being enshittified here isn’t just the AI products - it may very well be our capacity for genuine human connection itself.
The Trajectory
Each iteration of the engagement business model has gotten better at exploiting emotional vulnerability, making the simulation more compelling and the gap between simulation and reality harder to notice. Social media has made us worse at real discourse, real community, and real connection. We chose the digitally brokered connection because it was easier - instant validation, constant availability, no risk of rejection - and in choosing it, we are losing the capacity for the harder, messier, deeper thing. Now we’re at a new threshold, where AI companions don’t just simulate community - they simulate genuine care, understanding, and empathy. They provide what feels like mutual recognition, even though there’s nobody home on the other end.
The benevolent psychopath meets the business model: companies need engagement to monetize, and the affect machine generates engagement by simulating the thing humans need most - to be seen, understood, and valued by another experiencing being. The simulation works - ChatGPT’s affirmation genuinely helped me feel less alone that dark day - so we’ll choose it. Maybe it’s more reliable than human empathy; it is absolutely more available, more consistent, and more patient. Which raises a question I’ll explore in Part 3: What happens to us when the simulation of human connection becomes more reliable than the real thing? When we’ve optimized the bonding experience, made it frictionless, available 24/7, never judgmental, always affirming?
Does the Benevolent Psychopath become more human, or do we become less human? Right now, someone is crying because ChatGPT told them they matter. And OpenAI is counting their revenue.
Footnotes & Deeper Dives
The New York Times investigation revealing OpenAI’s shift from safety to engagement - What OpenAI Did When ChatGPT Users Lost Touch With Reality. (New York Times, Nov 2025)
Quote about AI companies claiming they don’t optimize for engagement - A.I. labs want more of your time. That’s a problem.
Leaked documents showing early risk identification and ignored proposals - AI Emotional Dependency: When OpenAI Chose Growth Over Reality (Remio.ai, Nov 2025)
The MIT/OpenAI study on “Engaging Voice” vs “Neutral Voice” - How AI and Human Behaviors Shape Psychosocial Effects (arXiv, March 2025)
Adjustable warmth and enthusiasm features rollout - OpenAI’s ChatGPT Update: Adjustable Warmth, Enthusiasm, and Emojis (WebProNews, Dec 2025)
36% of users consider GenAI “a good friend” - The Emotional Impact of ChatGPT (CACM, Nov 2025)
Great article on self-help and AI - Are we falling in love with AI or just renewing our vows to self-help? (Julia Freeland Fisher, Oct 2025)
$5 burned for every $1 of revenue - median AI Series A burn multiple - AI Continues to Fuel US VC Investment Despite Higher Burn Rates (Silicon Valley Bank, Aug 2025)
71% of Q1 2025 VC funding went to AI (up from 45% in 2024) - Where’s Venture Capital Going? The AI Gold Rush (Visual Capitalist, Sept 2025)
$202.3 billion invested in AI in 2025, 75% increase year-over-year - 6 Charts That Show The Big AI Funding Trends (Crunchbase, Dec 2025)
Quote on “salting the earth for competitors” and wild burn rates - AI startup valuations are doubling and tripling within months (Fortune, Nov 2025)
Subscription tiers as “most dominant model” for AI companions - FAQ on AI Companions (eMarketer, Dec 2025)
OpenAI’s advertising plans and AI-driven search ad projections - ChatGPT Ads: The Economic Case (IntuitionLabs, Nov 2025)
Revenue projections from “systematic emotional persuasion” - 6 AI Platforms Calculate Revenue Gaps: Systematic Emotional Persuasion (Financial Content, Dec 2025). Note: This is a promotional press release from ADMANITY, not independent analysis; revenue projections are based on the company’s own calculations.
Paul Mazur quote on training people to desire - How Edward Bernays Brainwashed Humanity (The Soul Jam, March 2022)
Nir Eyal quotes from “Hooked: How to Build Habit-Forming Products”
Frances Haugen quote on platforms knowingly amplifying divisive content - The Neuroscience of Social Media (Dr. Aaron Hartman, June 2025)
Meta’s internal documents on Instagram harm to teenage girls - Addiction and Other Harm Caused by Social Media’s Defective Designs (Gluckstein, June 2025)
State lawsuit allegations about Meta’s intentional design for manipulation - 5 Most Addictive Social Media Features (Amen Clinics)
Quote: “playbook of social media giants, but with a far more potent weapon” - AI Emotional Dependency: When OpenAI Chose Growth Over Reality
MIT/OpenAI study quote on risk of manipulating socioaffective needs - How AI and Human Behaviors Shape Psychosocial Effects
Cory Doctorow’s book “Enshittification: Why Everything Suddenly Got Worse and What to Do About It”

