The AI Book Worth Reading: AI Snake Oil
My new recommendation for anyone looking to learn a bit more about AI: Arvind Narayanan and Sayash Kapoor's *AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference*.
While generative AI innovation seems to arrive daily, this book remains remarkably relevant. Its core arguments have only become more important as AI product announcements and deployments accelerate. What I appreciate most: the refusal to be swept up in either a blindly optimistic or an overly pessimistic view. The book takes AI seriously, both its genuine capabilities and its genuine limitations, without surrendering to hype. You can be critical of AI snake oil while acknowledging real progress. This nuanced thinking is what we need. Not hot takes, not apocalyptic warnings, not breathless hype. Just careful analysis of where we actually are and where we might be going.
For anyone working with AI, fearful of AI, or simply trying to understand what's actually happening, this is essential reading. The authors manage to be simultaneously optimistic about some AI applications and deeply critical of AI hype. That balance is rare and valuable. This book does something uncommon: it is technically grounded and philosophically honest about where we actually are with artificial intelligence. Below are three areas that stood out for me, though the entire book is worth your time.
What do we mean when we say “AI”?
The authors are cautiously optimistic while being clear-eyed about limitations. This distinction matters: when we talk about “AI risk,” we conflate present-day harms from predictive AI with speculative future concerns about generative AI. Different problems need different responses. Their call to stop talking about “AI” as if it were one thing is crucial. Narayanan and Kapoor draw a sharp line between predictive AI and generative AI, and further discuss AI for content moderation. Being able to distinguish among the many forms of AI lets you understand AI, its capabilities, and our likely future together without fear.

Predictive AI is what businesses and governments claim can predict who will commit crimes, who will succeed at a job, and how long you'll stay in the hospital. This is where actual harm is happening right now: it is deployed in criminal justice, healthcare, hiring, and insurance with far more confidence than the science warrants. Most predictive AI tries to do something impossible: predict inherently unpredictable human behavior. Be skeptical of anyone selling it.

Generative AI (e.g., ChatGPT, Stable Diffusion) is different. It represents real technical progress, but it is early and not yet as reliable as we might expect.
How fast is AI innovation actually happening?
The book uses a “ladder of generality” framework: AI capabilities gradually increase in flexibility and scope, each rung more general and powerful than the one below. Current LLMs sit on the seventh rung. This challenges the binary thinking around AGI (artificial general intelligence) promoted in the media and in AI company announcements. There's no single threshold where AI suddenly becomes “generally intelligent” in some superhuman way, just a continuous progression toward more capable systems. AI progress has been incremental and gradual, even when ChatGPT made it feel sudden.
“Chatbots are trained to produce plausible text, not true statements.”
This connects to what I’ve been exploring about LLMs simulating emotional affect without experiencing it. The authors discuss chatbots as “bullshitters” in Harry Frankfurt’s sense: trained to produce plausible text, not true statements. There is no source of truth during training, just pattern learning. The philosophical implications matter. These systems build internal representations of the world through training, but those representations differ from ours, impoverished because the systems don’t interact with the world as we do. Yet they’re still useful, enabling capabilities that would be impossible if they were just “giant statistical tables.”
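To make that point concrete, here’s a toy sketch of my own (not the authors’, and nothing like a real LLM’s implementation): a bigram model that learns only which words tend to follow which. Trained on text containing only true sentences, it will happily generate fluent combinations like “the moon orbits the sun” that it was never taught and that happen to be false, because plausibility is all it optimizes for.

```python
import random
from collections import defaultdict

# Toy training text: every sentence here is true.
corpus = (
    "the moon orbits the earth . "
    "the earth orbits the sun . "
    "the sun is a star ."
).split()

# Learn word-to-next-word transitions; no notion of truth, only co-occurrence.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start="the", length=8):
    """Sample a plausible-looking continuation; truth is not guaranteed."""
    words = [start]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate())  # may emit the false but fluent "the moon orbits the sun ..."
```

Real LLMs are vastly more sophisticated than this, but the training objective is similar in kind: predict what comes next, not what is true.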
Why All the Hype?
The book identifies four culprits who build AI hype: companies with commercial interests, researchers seeking attention and funding, journalists who amplify without verification, and public figures spreading myths. They also discuss criti-hype, a term coined by Lee Vinsel, describing criticism that portrays technology as all-powerful instead of calling out its limitations. When we say “AI will take all our jobs” or “AI poses an existential threat,” we think we’re being critical. But we’re overstating AI’s capabilities in ways that benefit companies wanting less scrutiny.
“Accepting the inherent randomness and uncertainty in many of these outcomes could lead to better decisions, and ultimately, better institutions.”
The existential risk section is especially good. The authors argue that fears about rogue AI rest on flawed premises, particularly the notion that AI will cross some critical threshold. They show that progress has been gradual and incremental, not punctuated by sudden breakthroughs, and that current innovation builds on 80 years of prior work.
Criticisms
No book is without its critics, and AI Snake Oil has received some valid pushback worth considering.
Western Focus
The book primarily examines US contexts, particularly what I would consider a focus on Silicon Valley. While it touches on the global impact of labeling and training-data economies, there isn’t much discussion of how AI plays out in other regions.[^1] This is a fair criticism. That said, the media cycle and political action around AI are intensely focused on Silicon Valley, so the book’s scope seems appropriate for its purpose: understanding the hype machine and where it originates.
Too Skeptical
Some reviewers, including Joshua Rothman in *The New Yorker*, argue the authors are “deeply skeptical” when “perhaps they shouldn’t be.”[^2] This criticism confuses skepticism with pessimism. Healthy skepticism is often the catalyst for improving a field, and I’d rather the authors err on the side of skepticism than maximalism. The AI space has enough cheerleaders with power; we need more people willing to call out snake oil when they see it.
Limited Focus on Power (Political, Economic, etc.)
Edward Ongweso Jr. critiques the book for not engaging deeply enough with who holds power in AI.[^3] He’s right: the authors focus more on how the technology works than on who controls it. But I think this is appropriate. The politics of power in big tech is complex enough to deserve its own book, and mixing that discussion with technical analysis of AI capabilities would muddy both topics. Better for a separate work to lay out the power dynamics clearly on its own terms. I won’t claim I plan to write anything on the level of this book, but I will explore economic and political power, as it relates to the motivations and incentives behind problematic AI products and their perception, here on this Substack.
*AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference* by Arvind Narayanan and Sayash Kapoor. Princeton University Press, September 2024.
[^1]: Alexya Martinez, book review in *Journalism and Mass Communication Quarterly* (as cited in the Wikipedia article on *AI Snake Oil*).
[^2]: Joshua Rothman, [“Two Paths for A.I.”](https://www.newyorker.com/culture/open-questions/two-paths-for-ai), *The New Yorker*, May 27, 2025.
[^3]: Edward Ongweso Jr., [“AI Scams Are the Point”](https://newrepublic.com/article/188313/artifical-intelligence-scams-propaganda-deceit), *The New Republic*, November 21, 2024.


