Against "Brain Damage" - by Ethan Mollick
1. Introduction
Mollick frames the central anxiety about generative AI: the fear that relying on it may somehow harm our ability to think, learn, or create — in essence, that it may cause “brain damage.” He explains that while the concern is metaphorical, it reflects a deeper unease about what happens when external tools begin to take over roles traditionally tied to human cognition. The question is not whether AI literally damages the brain, but whether its widespread use might dull or displace essential mental processes. Mollick uses this concern as a launching point to examine AI’s actual impact across various domains of cognition.
2. The Learning Brain
Mollick discusses how AI interacts with the way we learn. He acknowledges that learning often involves effort, struggle, and cognitive friction — processes that are integral to building understanding and mastery. By offering fast, polished answers, AI tools can bypass this valuable friction, potentially short-circuiting deep learning if students use them as shortcuts rather than as scaffolding.
However, Mollick doesn't argue that AI is inherently harmful to learning. Instead, he suggests that the risk lies in unreflective usage. If learners rely on AI to provide answers without engaging critically or trying to understand the underlying logic, they risk undermining their own growth. He compares this to using a calculator before understanding arithmetic: it may produce the right result, but without the foundational knowledge, the result is meaningless.
On the flip side, Mollick presents evidence that when used thoughtfully, AI can enhance learning. For instance, students can use AI to test their understanding by asking it to simulate roles (like a tutor or an examiner), or to rephrase complex material in simpler terms. AI can be a powerful tool for generating feedback, posing questions, and encouraging iteration — if learners are actively involved in the process.
Ultimately, AI's role in education depends on pedagogy and mindset. The key is not banning AI, but designing learning experiences that integrate it productively while reinforcing critical thinking, self-explanation, and intellectual curiosity. Used this way, AI becomes an accelerant to learning rather than a shortcut around it.
3. The Creative Brain
Mollick turns next to creativity, arguing that fears about AI destroying creativity often misunderstand both the nature of AI and the human creative process. Creativity is not a fixed trait but a skill — one that thrives on exposure to new ideas, constraints, and experimentation. AI, rather than replacing creativity, can act as a muse or collaborator that offers unexpected suggestions, perspectives, or juxtapositions that a human might not think of on their own.
He notes that creative professionals are already using AI in this way: to brainstorm, to explore different tones or styles, or to generate variations on a concept. These interactions don't diminish creativity but can expand its boundaries, giving humans more material to work with and more directions to explore. Importantly, Mollick emphasizes that creativity with AI still requires human judgment — the ability to recognize what is good, original, or meaningful.
That said, he also acknowledges a potential risk: that over-reliance on AI could lead to homogenization or laziness if users settle for the first suggestion or fail to refine outputs. Just as using templates too often can lead to formulaic results, uncritical dependence on AI may stifle originality unless balanced with active engagement and iteration.
In the end, Mollick argues that AI should be treated not as a replacement for the creative mind but as a flexible, generative partner — one that enhances our capacity to imagine and make, provided we remain the ones steering the process.
4. The Collective Brain
Mollick explores how large language models (LLMs) — trained on human discourse — can mimic human interaction, sometimes with startling realism. He argues that these systems offer a form of simulated empathy or companionship that may affect how we relate to others, or how we process our own emotions.
On the positive side, AI tools can help people rehearse difficult conversations, explore different emotional responses, or even reflect on problems from new angles. They can act as nonjudgmental listeners or mirrors, facilitating self-awareness and emotional regulation. This makes them potentially useful in therapy, coaching, or social training contexts.
However, Mollick also warns of ethical and psychological pitfalls. Interacting with a simulated person — even one that is helpful or friendly — may distort our expectations of real human interaction. The danger isn't that the AI will manipulate us maliciously (though that’s possible), but that we may begin to prefer its predictability, politeness, or constant availability over the messiness of actual relationships.
He concludes that while AI may not replace real social connection, it is becoming part of our collective “social brain” — an ever-present interlocutor that can guide, influence, or distort our interpersonal lives. We need to cultivate awareness and intentionality in how we incorporate these tools into our emotional and relational habits.
5. Against “Brain Damage”
Mollick argues that AI doesn’t cause literal brain damage, but if misused — as a crutch rather than a scaffold — it can inhibit growth, dull creativity, and erode critical engagement. The goal, then, isn’t to reject AI but to shape our habits around it thoughtfully. Like any powerful tool, AI magnifies the intentions and skills of its user. If we engage with it passively, it may lead to intellectual stagnation. But if we treat it as a partner in learning, creativity, and connection, it can help us build — rather than break — our mental capacities.