The Trouble with Effortless Learning
- Manas Chakrabarti
- Nov 28, 2025
- 5 min read
A 2024 paper from the University of Pennsylvania came with an unambiguous title: “Generative AI Can Harm Learning.” In this study of high-school students, those who used a generative-AI assistant during practice scored dramatically better in the short term — 48 to 127 percent higher than peers who worked unaided. But when the assistant was taken away and they were tested on their own, their scores dropped 17 percent below those of students who hadn’t used AI at all. The explanation was simple: the tool had become a cognitive crutch. Students had bypassed the mental struggle that cements knowledge.
It’s a finding that seems to confirm every educator’s unease. The age of “effortless learning” has arrived — and it appears to hollow out the very things we most value: effort, persistence, the slow construction of understanding. And yet, as compelling as this evidence is, it sits within a much larger story — about technology’s messy adolescence, the myths we tell around AI, and the deeper pedagogical question still worth asking: what forms of effort do we need to preserve?
A recent argument in MIT Technology Review offers a useful frame. New technologies, it suggests, always unleash a flood of trash before they mature. The printing press did not inaugurate a golden age of literature; it unleashed pamphlets, fake prophecies, cheap erotica, and pseudoscience. The early internet flooded the world with spam long before it gave us libraries, open-source communities, and MOOCs. Even photography went through decades of gimmickry and overproduction before finding its artistic language. The “trash phase” is not a bug. It is the compost. It is the messy substrate from which new norms, practices, and aesthetics eventually emerge.
Perhaps, then, the first wave of AI-generated mediocrity — the formulaic essays, the soulless reports, the overconfident hallucinations — is not the end of knowing but the noisy, necessary adolescence of a new technology. Like adolescence itself, this phase is marked by imitation, excess, and impulsive experimentation. It tells us little about what the technology will eventually mean; it tells us much about the immaturity of our norms around it.
Part of the confusion also comes from the stories we build around AI. Another piece in the same magazine argues that “AGI” — the idea of a godlike, super-capable artificial mind — has become one of the most consequential conspiracy theories of our time. Not because it is malicious, but because it is mythological: a grand, speculative narrative gripping the public imagination. The problem is not the ambition of the idea; it is that this narrative distorts our attention. Schools and policymakers feel pulled between two equally unrealistic poles — the utopian dream of AI-rendered abundance and the dystopian fear of AI-fuelled collapse. Both are dramatic. Neither helps a teacher plan tomorrow’s lesson.
We don’t need salvation or apocalypse. We need pedagogy.
The UPenn study rests on a well-established principle from cognitive psychology: the idea of “desirable difficulties,” a term coined by Robert Bjork and developed with Elizabeth Bjork. We learn more deeply when the learning process feels effortful — when we retrieve information, generate explanations, and wrestle with problems just beyond our comfort zone. Generative AI short-circuits this beautifully inefficient process. It offers fluency without struggle; it replaces slow effort with finished prose. The result is what researchers call “illusions of competence” — students feel as if they understand, but the understanding dissolves when pressure arrives.
This should not surprise us, but the conclusion we draw matters. If our only response is to ban AI, then we have misunderstood both the psychology and the technology. Cognitive outsourcing is not new; humans have done it for millennia. Writing externalised memory. The abacus externalised calculation. Search engines externalised retrieval. Each outsourcing freed us to do something else — imagine, analyse, collaborate.
The question is not whether outsourcing happens, but what kind of outsourcing strengthens thinking rather than weakens it. If generative AI writes for us, perhaps the next frontier of difficulty — the next site of desirable struggle — is judgment, not composition; discernment, not recall; shaping ideas, not assembling sentences. The locus of effort may shift rather than disappear.
When I taught Global Citizenship, I sometimes asked students to use ChatGPT to generate an essay — and then we critiqued it together in class. They laughed at the confident generalisations, dug into its blind spots, and rewrote large sections. The learning happened not in writing the answer but in interrogating it. The tool didn’t replace effort; it redirected it. This, I suspect, is where education finds its footing: not in defending old forms of effort, but in cultivating new ones.
The challenge is that schools today feel trapped between two temptations. One is to clamp down on AI to preserve the sanctity of learning. The other is to surrender to AI as the inevitable future. Both positions are understandable; both are mistakes. The first refuses to accept that tools reshape cognition. The second refuses to recognise that human skills — attention, reflection, intellectual humility — are not automatically retained in the presence of AI assistants.
Navigating between these extremes requires a different posture altogether. Perhaps the metaphor of adolescence is helpful again. Humanity is in its teenage years with AI: exuberant, reckless, boundary-testing. Mistakes are guaranteed. But they are also generative — if we learn from them. Tight control will not teach discernment; blind optimism will not teach responsibility. What we need is attentive flexibility: open enough to experiment, grounded enough to reflect.
The deeper question, then, is not whether AI makes learning effortless. It is whether we can help young people recognise which efforts remain indispensable. Some forms of difficulty are timeless — grappling with ambiguity, listening deeply, noticing what others overlook, constructing an argument that reveals original thought, sitting with discomfort long enough for insight to form. AI can assist with these efforts, but it cannot substitute for them. It can scaffold understanding, but it cannot bestow curiosity. It can help us see more, but it cannot choose what is worth seeing.
So the real work is not to block AI or to worship it, but to recalibrate the ecology of difficulty. To design learning where technology amplifies attention rather than erodes it, and where students feel — not just know — the difference between thought and mere output.
The hardest truth, however, is that we don’t yet know how any of this will turn out. Not the researchers, not the critics, not the evangelists, not the politicians, not the tech CEOs. We are all guessing at futures shaped by tools that are themselves still evolving. There is no stable horizon yet. But there is stable ground: how humans learn. If we hold on to the slow, human work — reflection, connection, inquiry — then the tools will eventually find their mature place. The trash will settle. The myths will fade. The adolescent turbulence will give way to a steadier rhythm.
In the meantime, we navigate as educators have always navigated: with curiosity, with humility, and with a steady refusal to be rushed into either panic or euphoria. Because in the end, real learning has never been effortless. And the work of building good judgment — in an age of infinite shortcuts — might be the most important difficulty of all.