Is Artificial General Intelligence Already Here?
- Manas Chakrabarti
- Feb 6
- 6 min read
“Just five years ago, we didn’t have AGI; now we do.”
That sentence appears not in a manifesto or a venture-capital blog post, but in an article published in Nature on February 2, 2026. It is offered almost casually, without flourish. Yet it unsettles the grammar that has governed public conversations about artificial intelligence for the past decade.
For years, those conversations have been conducted almost entirely in the future tense. When artificial general intelligence arrives, work will change. Once machines can truly think, education will need to adapt. In the coming decades, society will face disruptions unlike any before. The future tense performs an important function here. It keeps intelligence safely over the horizon, preserving the belief that there is still time: time to adjust, time to reform, time to postpone harder questions about what our institutions are actually built for.
The Nature article challenges this habit of postponement directly. It does not claim that machines have become conscious or human-like. Instead, it argues that by several widely used operational definitions, artificial general intelligence is already present in existing systems. The argument is measured rather than dramatic, which is precisely why it deserves to be taken seriously.
What the Nature Article Is Actually Claiming
The claim itself is neither mystical nor sensational. The article does not argue that machines are conscious, self-aware, or thinking like humans. It argues something much narrower: by definitions researchers have used for years, artificial general intelligence already exists.
Those definitions are practical. They ask whether a system can operate across domains rather than within a single narrow task, whether it can transfer what it knows from one context to another, and whether it can handle unfamiliar problems without being rebuilt each time.
Measured that way, the evidence is easy to see. Systems that once translated text now write essays, explain scientific ideas, tutor students, debug code, summarise legal documents, and reason through unfamiliar problems. They are uneven and sometimes make mistakes, but they are no longer confined to a single domain. The point is not that they do everything perfectly, but that they do many different kinds of things competently enough to matter.
This is why waiting for flawless performance misses the point. Human intelligence itself is riddled with error, approximation, and blind spots. We do not require people to be infallible before we call them intelligent. The same standard, the article suggests, should apply here.
The article is also explicit about what it does not claim. It does not argue that these systems possess judgement, values, or moral responsibility. It does not suggest they should be trusted without scrutiny. The claim is simply that a threshold long assumed to lie in the future has already been crossed in practice.
The World Is Already Behaving Differently
The strongest evidence that something has shifted is not found in benchmarks or definitions, but in behaviour. Long before institutions change their language, they change their practices. And across work and everyday life, those practices already reflect the reality that general cognitive capability has become abundant.
Consider how quickly unaided human work has become suspect. In many professional settings, written output is now assumed to be AI-assisted by default. Reports, summaries, analyses, even emails are no longer treated as reliable signals of individual competence unless produced under tightly controlled conditions. For decades, written work stood in for thinking. Now it often requires defence.
Outside formal institutions, everyday habits have changed just as quickly. People routinely consult AI systems to draft messages, plan trips, understand medical information, learn new topics, or think through decisions. This behaviour is no longer treated as novel or futuristic. It has become mundane. The fallibility of these systems has not prevented their adoption, just as human fallibility never prevented reliance on other humans. What matters is usefulness across contexts, not perfection.
Schooling and the Scarcity It Depends On
No institution depends more heavily on the scarcity of general intelligence than schooling. For decades, its central promise has been simple and morally persuasive: acquire the right skills, demonstrate them through assessment, earn credentials, and gain access to work and security. That promise only holds if individual cognitive performance can be treated as both scarce and attributable.
That assumption is now under pressure.
In classrooms and universities, the most visible response has been procedural anxiety. Essays, projects, and take-home assignments are increasingly treated with suspicion. Teachers are asked to distinguish “student work” from “assisted work,” often without reliable tools or shared norms. The result is an erosion of trust. Work that once functioned as evidence of understanding now requires authentication.
The institutional response to this uncertainty has been telling. Rather than rethinking what counts as learning, many systems have retreated to enforced scarcity. More invigilated exams. More handwritten assessments. More emphasis on producing work under surveillance. These moves are often framed as a return to rigour, but they function primarily as containment. When general intelligence becomes abundant outside the classroom, schools try to recreate scarcity inside it.
The deeper problem is that much of what schooling has rewarded for years is now trivially reproducible. Summaries, explanations, structured arguments, and competent analysis no longer function as clear signals of individual mastery.
What becomes visible, uncomfortably, is that schooling was never only about learning. It was mainly about sorting. Credentials were not just records of understanding; they were filters. When cognitive output is no longer scarce, those filters stop working, and the legitimacy of the system begins to unravel.
The Objections That Refuse to Die
At this point, a familiar set of objections usually appears. Artificial intelligence, we are reminded, makes mistakes. It hallucinates. It contradicts itself. It merely predicts the next word rather than understanding anything at all. These claims are often delivered as conversation-stoppers.
They are not.
Yes, these systems make mistakes. So do humans. Constantly. In medicine, in law, in education, in public life. Error has never disqualified intelligence in humans.
Hallucination is treated more seriously, and rightly so in high-stakes contexts. But here too the comparison is instructive. Human cognition hallucinates routinely. We confabulate memories, impose coherence on fragments, and mistake confidence for accuracy. We rarely call this hallucination. We call it explanation, judgement, or narrative.
The most persistent objection sounds technical and therefore decisive: these systems are “just” predicting the next word. But this collapses meaning into mechanism. Prediction is not trivial. To predict well across unfamiliar contexts requires internalising deep regularities in language, reasoning, and structure. Humans do this constantly. Calling it “just prediction” does not diminish it. It names the core operation of intelligence itself.
All these objections share a common move. They compare artificial systems not to ordinary human cognition as it actually exists, but to an idealised version of human intelligence at its best. They demand infallibility from machines while granting humans endless grace. Under those standards, no real intelligence would ever qualify.
Which brings us to the final dismissal: the sheer volume of low-quality, AI-generated content now flooding the internet.
That flood of slop clogging social media is often presented as evidence that the systems producing it are shallow, derivative, and unintelligent. This is a comforting misreading. It does not reveal the poverty of artificial intelligence; it reveals the poverty of human intention. When given a tool capable of producing language at scale, we did not ask it to help us think better, see more clearly, or speak with greater care. We asked it to post more, faster, louder. The result looks ugly not because the intelligence is weak, but because our standards already were. AI did not lower the bar; it made it visible.
The Question We Kept Postponing
If artificial general intelligence is already here, then the crisis is not technological. It is institutional and moral. The technology did not strip work or learning of meaning. It exposed how thin our definitions of meaning had become.
Education was justified by usefulness. Work was justified by productivity. Intelligence was justified by scarcity. Once those justifications weaken, a question we have postponed for decades returns with force: what is learning for, when it cannot justify itself economically?
That question has no quick answer. But refusing it is no longer an option.
As long as we continue to speak of AGI as something that will arrive later, we preserve the illusion that our institutions still have time. If the argument in Nature is correct, that time has already passed. What remains is not preparation but reckoning.