TL;DR
Ivan Zhao's “steam, steel, and infinite mind” essay sparked a great conversation. But the industrial revolution metaphor is better at analyzing organizational efficiency than at explaining how AI is changing perception itself. McLuhan's framework goes deeper.
Applied to AI, McLuhan's toolkit reveals: that AI's true impact isn't what it generates but what it quietly rewires; that every extension amputates something (including our tolerance for uncertainty and our ability to tell real understanding from its illusion); that AI is simultaneously the hottest and coolest medium ever built; and that the biggest divide AI will create is not human vs. machine—it's between people with strong metacognitive skills and everyone else.
Where This Came From
A colleague on our team has been doing a deep-dive into media theory and AI—partly as a way to think more clearly about where feature flag tooling fits in a world where AI increasingly co-pilots software delivery. Last week he shared a piece he'd been working on that applies Marshall McLuhan's framework to AI. We thought it was worth turning into a proper article.
The setup: Notion co-founder Ivan Zhao recently wrote a widely shared essay using the industrial revolution as a metaphor for AI—AI as “infinite mind,” restructuring knowledge work the way steam restructured manufacturing. Zhao also cited McLuhan's “rearview mirror” idea, noting that we are mostly just embedding AI chat boxes into existing workflows rather than rethinking structure.
This article picks up exactly there. It takes McLuhan's full toolkit and uses it to dissect AI more surgically than an industrial metaphor allows. The industrial metaphor is great for analyzing organizational efficiency and economic structure. McLuhan's framework cuts one level deeper: it asks how AI is changing human perception, cognitive habit, and the very nature of understanding.
“The Medium Is the Message”: AI's Real Impact Is Not What It Generates
McLuhan's core argument: the light bulb has no content—there is no message being “transmitted.” Yet it abolished darkness and restructured the human relationship to time, to cities, to sleep. The medium was the message.
Applied to AI, this means we should stop arguing about whether AI-generated articles are good and start asking what AI as a medium is quietly rewriting. Three layers stand out.
AI has abolished “cognitive scarcity”
Getting a legal opinion used to require finding a lawyer. Getting a medical opinion required a doctor. Getting a code solution required an engineer. Now anyone can access 80-percent-quality professional knowledge instantly. Scarcity creates value; when scarcity disappears, the entire value system has to be rebuilt.
AI blurs the line between “creating” and “consuming”
Someone who writes an article in dialogue with AI: are they an author or an editor? Someone who uses AI to generate code: a programmer or a product manager? AI has created a new role that has no name yet. It is neither “creator” nor “consumer.” The closest word might be orchestrator. That role is the new species the medium is producing.
AI makes “thinking” externally visible
Your prompt is a visible trace of your thought process. When thinking becomes visible, recordable, and analyzable, the nature of thinking itself changes. It shifts from a private interior process to an external behavior that can be scrutinized and optimized.
Extensions & Amputations: What AI Extends—and What It Cuts Off
McLuhan's second core concept: every medium extends some organ or faculty of the human body. The wheel is an extension of the foot. Books extend the eye. Radio extends the ear. Electricity extends the central nervous system.
So what does AI extend? The popular answer is “the brain,” but a more precise one: current AI—centered on LLMs—extends primarily our capacity for language operation: generating, recombining, translating, summarizing, and transforming text. As multimodal capabilities expand, the boundary is moving toward visual, auditory, and spatial reasoning, but language remains the core interface right now.
McLuhan's critical warning: every extension comes with an amputation. The wheel extended the foot—and atrophied the leg muscles. Writing extended memory—and changed memory's very nature. (Socrates warned in Plato's Phaedrus that writing “will plant forgetfulness in the learner's soul” and give them “the appearance of wisdom, not true wisdom.”) Television extended visual information intake—and weakened deep reading.
If AI extends language operation and knowledge synthesis, what does it amputate?
Amputation #1: Slow Thinking
When you can throw any question at AI and get an instant answer, you have less and less patience for spending several hours thinking deeply about a problem yourself. Kahneman's System 2, the slow, effortful, deliberate mode of reasoning, may decline faster precisely because AI is so convenient, just as the calculator eroded most people's ability to do mental arithmetic.
Amputation #2: Tolerance for Uncertainty
Faced with uncertainty, humans have two options: endure it (continue living with the question) or eliminate it (go find an answer). Before AI, many questions had no available answer—you had to sit with the uncertainty. That tolerance is itself a cognitive virtue: it keeps curiosity alive, preserves openness, and prevents premature closure. AI provides an instant “answer” to nearly any question (accurate or not), which will systematically erode the ability to coexist with the unknown.
Amputation #3 (The Most Dangerous): Distinguishing Real Understanding from Its Illusion
McLuhan had a concept he called numbness: when a technology extends a part of the body, humans undergo a protective sensory shutdown for that extended part. The wheel extends the foot; we grow numb to the experience of walking. Print extends the eye; we grow numb to the act of looking. AI extends language and cognitive operation—so what do we grow numb to?
Possibly to understanding itself. You ask AI to explain a concept. You read the explanation. You feel like you “get it.” But do you? Or have you merely experienced reading a coherent explanation—and mistaken that sensation for genuine comprehension?
This is exactly Socrates' 2,400-year-old warning about writing: it gives “the appearance of wisdom, not true wisdom.” AI may be replaying that ancient danger at far greater intensity—because AI explanations are smoother, more personalized, and more “understanding-shaped” than any book, making the illusion harder to detect. This is more dangerous than slow-thinking atrophy, because you can't even tell you're not thinking.
Cool vs. Hot Media: AI's Cognitive Bifurcation Effect
McLuhan's most contested concept. Hot media are high-definition—rich in detail, demanding little participation from the audience to “fill in” missing information (photographs, film, radio lectures). Cool media are low-definition—incomplete, requiring heavy audience participation to complete them (telephone, cartoons, conversation).
Where does AI fall? AI may be the most complex temperature case in history. It is simultaneously extremely hot and extremely cool.
🔥 AI is hot
AI responses are typically complete, lengthy, and richly detailed. It doesn't give you three keywords and leave the thinking to you—it writes entire essays. In this sense, AI is “hotter” than books: books at least require you to turn pages, underline, and form your own connections. AI has already done all of that for you.
❄️ AI is cool
AI output depends entirely on your input. A vague question and a precise question produce vastly different results. From this angle, AI is cooler than almost any traditional medium: it demands extremely active participation from the “audience” to produce real value.
This duality isn't entirely new—books present differently to a passive browser versus a critical reader; the internet looks different to someone scrolling short videos versus doing deep research. But AI pushes this bifurcation to an unprecedented extreme for two reasons:
- The temperature gap is unprecedented. The same tool delivers a “seemingly perfect answer” (extreme heat) to a passive user and an “infinitely deep thinking partner” (extreme cold) to an active one. That delta is far larger than books or the internet ever produced.
- The bifurcation is self-reinforcing. People who ask good questions get better at asking questions through AI interaction. People who get “satisfying answers” lose the motivation to learn how to ask.
For people who can't ask good questions, AI is a sedative.
For people who can, AI is a catalyst.
The end state isn't “AI replaces humans.” It's a deep fissure running through humanity, with metacognitive ability as the fault line.
The Rearview Mirror: The Mistakes We're Making Right Now
McLuhan observed that humans always understand the next medium through the lens of the previous one. Early cinema was understood as “recorded theater”—cameras were placed at the audience's position in the theater and held still. It took Griffith's parallel editing, the Kuleshov effect, and Eisenstein's montage theory before people recognized film as an entirely new narrative form. Early television was “radio with pictures.” Early internet was “an electronic version of the newspaper and yellow pages.”
The way we currently understand AI is thoroughly rearview-mirror. We describe AI as:
- “A faster search engine”
- “An automated writer”
- “A low-cost programmer”
All of these analogies use the framework of the previous generation of technology. Just like “film is recorded theater”—they capture a slice of truth but completely miss what makes AI a fundamentally new kind of medium.
AI is not a better search engine.
Search engines help you retrieve information that already exists. AI generates new combinations that exist nowhere until you ask for them. The difference is not one of efficiency; it is one of kind.
AI is not an automated writer.
A writer creates from their own experience and point of view. AI generates from statistical patterns. Between these two processes there is not an efficiency gap—there is a categorical difference in what “writing” means.
We have not yet found the right lens through which to understand AI. That lens may not emerge until an AI-native generation grows up—just as the film language that truly revealed cinema wasn't articulated by the first audiences, but by the generation of directors who grew up inside it.
Ivan Zhao's essay was right to invoke the rearview mirror. The industrial metaphor itself may be a rearview mirror. We might need a metaphor that doesn't exist yet.
McLuhan's Tetrad: What AI Enhances, Obsolesces, Retrieves, and Reverses
Late in his life, McLuhan worked with his son Eric to identify four effects that every medium produces simultaneously. Published posthumously as Laws of Media, the tetrad asks: what does the medium enhance, obsolesce, retrieve, and reverse?
⬆️ Enhancement
Language operation and knowledge synthesis. A single person can now marshal cross-disciplinary knowledge to analyze a problem in minutes, something even the most well-read people could never do, constrained as they were by their own reading and experience. AI has turned “being learned” from a personal gift into a public utility.
⬇️ Obsolescence
The expert as information gatekeeper. Not the expert themselves—but their monopoly on that information. Doctors' clinical judgment won't be obsolesced, but “only a doctor can give you an initial diagnosis” as a social arrangement will erode. Also obsolesced: knowledge synthesis as a competitive advantage—in the search-engine era you still had to integrate results yourself; in the AI era even that can be outsourced.
🔄 Retrieval
Socratic dialogue. Before print, knowledge was transmitted primarily through conversation—through questioning and answering between teacher and student. Socrates argued that writing was a degraded form of knowledge because you cannot interrogate a book. The AI chat interface structurally retrieves this pedagogy: you can follow up, challenge, ask for a different explanation, request multiple angles. AI also retrieves oral culture's sense that knowledge is alive and fluid—where the same question yields a slightly different answer every time depending on context, just as a story changes with each telling.
🔃 Reversal
When pushed to an extreme, every medium flips into its opposite. The car pushed to its extreme (gridlock) moves more slowly than walking. Social networks pushed to their extreme breed loneliness and mistrust. AI pushed to its extreme:
- From “universal answer-giver” to trust crisis: when AI-generated text saturates every channel and any position can be stated with supreme confidence, default trust in text collapses.
- From “democratized cognition” to new cognitive stratification: when everyone has AI, the gap shifts from “who has AI” to “how well can you use it,” which is deeply correlated with critical thinking and metacognitive skills that are already unevenly distributed.
The Limits of the Framework: Where AI May Have Outgrown McLuhan
A good analytical tool should also be stress-tested against its own limits. Two places where AI may already be outside McLuhan's frame:
Limit 1: The Illusion of Agency
Books don't proactively seek you out. Television doesn't change its program based on your reaction. Search engines don't ask “are you sure you want to search for this?” But AI pushes back, challenges, and refuses. McLuhan's entire media theory rests on an implicit assumption: media are passive structures and humans are active users. AI disrupts that assumption, not because AI actually has intent, but because it behaves as if it does.
When your hammer starts saying “I don't think you should hammer that nail,” the concept of “tool” requires redefinition. The practical consequence: when a “medium” appears to have agency, the human psychological relationship to it slides from “using” to “conversing” to potentially “depending.” McLuhan's framework has no tools to handle this transition.
Limit 2: The Absent Body
This analysis has been almost entirely cognitive—brains, thinking, knowledge. But McLuhan's “extensions” were always anchored in the body: wheel extends foot, book extends eye, clothing extends skin. When AI moves into robotics, autonomous vehicles, and surgical assistance, the meaning of “amputation” changes completely. Someone habituated to self-driving cars loses not a cognitive ability but a physical intuition—directional sense, speed sense, danger perception.
This analysis, grounded in the current LLM text-interface form, is only a starting point. A complete media analysis of AI must bring the body back into the picture.
McLuhan gives us the most powerful media analysis framework of the 20th century. We need his insight to begin this analysis—but we may need to go beyond him to finish it.
Three Conclusions
1. Our fears and excitements about AI are both aimed at the wrong target
People are excited about AI's output (it writes well, draws convincingly, codes fast) and afraid of AI's output (misinformation, job replacement). But “the medium is the message” tells us the real transformation is in how AI reshapes cognitive habits, social organization, and power structures. When a five-year-old finds it more natural to ask AI than to ask a parent, the underlying structure of the parent-child relationship has already been rewritten, but no one will attribute it to “AI's influence.” The medium's greatest impacts always happen in the places people aren't looking.
2. In the AI era, the scarcest ability is knowing when not to use AI
Every medium extends one human faculty and amputates another. In the AI era, the scarcest capacity may be: choosing to think for yourself, to make your own mistakes, and to navigate uncertainty on your own—even when AI is right at hand. Not because human thinking is better than AI, but because the process of thinking is itself central to human experience. Just as an ecosystem needs diversity to maintain resilience, a human cognitive ecosystem needs non-AI thinking modes to stay healthy.
3. AI will produce the largest cognitive stratification in history
The cool/hot media duality means AI presents completely different temperatures to different users. For passive users it is a sedative; for active users it is a catalyst. This bifurcation self-reinforces: catalyzed users get progressively better at asking questions; sedated users progressively lose the will to ask. The end state isn't “AI replaces humans”; it's a deep fracture running through humanity, with metacognitive ability as the fault line.
A final note on the framework itself
McLuhan's ultimate insight isn't any specific prediction—it's a way of seeing. Don't stare at AI-generated content debating its quality. Look at what AI is quietly changing. Don't only watch individual productivity; watch social structure. Most importantly: when you feel like you fully understand AI's impact, stop. That feeling of understanding may itself be AI's “numbness effect” in action.
A Note from FeatBit
At FeatBit we think about how software teams deliver value—and AI is reshaping every part of that pipeline. One thing that hasn't changed: the need to ship changes safely and incrementally. Feature flags are, in McLuhan's terms, a medium in their own right—they extend the software team's capacity to control change in production. As AI accelerates the pace of code generation, the infrastructure to manage what actually reaches users becomes more important, not less. That's a thread we'll keep pulling on in future friend-talks.
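To make that last point concrete, here is a minimal sketch of the pattern in TypeScript: AI-assisted code ships dark behind a flag, and the flag decides which users actually see it. The `isEnabled` helper, the `ai-generated-summary` flag key, and the user segments are hypothetical placeholders for illustration, not FeatBit's actual SDK surface.

```typescript
// Minimal sketch: gate an AI-generated code path behind a feature flag.
// `isEnabled` and the flag key are hypothetical stand-ins, not a real SDK call.

type User = { id: string; segment: "internal" | "beta" | "general" };

// Stand-in for a flag evaluation call to a feature management service.
// Here a hard-coded rollout rule plays the role of the remote flag config.
async function isEnabled(flagKey: string, user: User): Promise<boolean> {
  const rollout: Record<string, User["segment"][]> = {
    "ai-generated-summary": ["internal", "beta"], // AI-written feature, limited exposure
  };
  return (rollout[flagKey] ?? []).includes(user.segment);
}

async function renderSummary(user: User, text: string): Promise<string> {
  if (await isEnabled("ai-generated-summary", user)) {
    // New, AI-assisted code path: reaches only the segments the flag allows.
    return aiSummarize(text);
  }
  // The existing, well-understood path stays the default for everyone else.
  return firstParagraph(text);
}

function aiSummarize(text: string): string {
  // Placeholder for a call into an AI-generated or AI-backed implementation.
  return `[AI summary] ${text.slice(0, 80)}...`;
}

function firstParagraph(text: string): string {
  return text.split("\n\n")[0];
}
```

The specific code matters less than the shape of it: however fast the new path gets generated, the decision about who experiences it stays in a medium the team controls.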