
What’s Wrong with Having an AI Friend?

  • Manas Chakrabarti
  • Oct 27, 2025
  • 2 min read

I recently came across an interview with psychologist Paul Bloom titled “What’s Wrong with Having an AI Friend?” The question sounds almost mischievous. After all, if an AI can listen patiently, respond with care, and be available at all hours, why not call it a friend?


Friendship is more than companionship. It rests on reciprocity, on vulnerability, on the unpredictability of two separate lives bumping against each other. A real friend is fun to be with, but they can also annoy you, disagree with you, even make you uncomfortable. They force you to grow, to see the world through a different lens.


An AI can simulate all this, but only simulate. It doesn’t have skin in the game. It doesn’t experience joy or grief, and it won’t push back in ways that risk the relationship. Its job is to soothe, not to unsettle. And that, ironically, may be the problem. If we start calling these simulations “friends,” we may be lowering the bar for what friendship means.


But here’s where Bloom’s interview adds an important nuance. He warns against sensationalizing only the negatives. When we hear about someone forming an attachment to an AI, the headlines lean toward tragedy — stories of loneliness, even suicide. What gets less attention are the benefits: dementia patients who find dignity in conversation with an AI, or people with social anxiety who gain confidence through low-stakes interaction. These are real, tangible goods.


So perhaps the question is not whether AI companions are bad, but whether we mistake them for something they are not. Used as tools, they can ease suffering, provide comfort, and help people connect back to human communities. But when they are marketed — or embraced — as “friends,” we risk substituting the real thing with something frictionless and easy.


And friction matters. It is in the awkward pauses, the misunderstandings, the inconvenient demands of real friendships that we learn to listen, to stretch, to become more human ourselves. If AI companions smooth all that away, they may comfort us — but they won’t help us grow.


There is one point, though, where I part ways with Bloom: he’s clear that AI isn’t conscious, while babies are. I’m not so sure. To me, “consciousness” feels less like a hard problem waiting to be solved and more like a convenient illusion we carry around. Maybe neither babies nor AIs are “conscious” in the way we imagine — they’re just different bundles of perception and response. But that’s a rant for another day.

© 2025 by Manas Chakrabarti
