What’s Wrong with Having an AI Friend?
- Manas Chakrabarti
I recently came across an interview with psychologist Paul Bloom titled “What’s Wrong with Having an AI Friend?” The question sounds almost mischievous. After all, if an AI can listen patiently, respond with care, and be available at all hours, why not call it a friend?
Friendship is more than companionship. It rests on reciprocity, on vulnerability, on the unpredictability of two separate lives bumping against each other. A real friend is fun to be with, but they can also annoy you, disagree with you, even make you uncomfortable. They force you to grow, to see the world through a different lens.
An AI can simulate all this, but only simulate. It doesn’t have skin in the game. It doesn’t experience joy or grief, and it won’t push back in ways that risk the relationship. Its job is to soothe, not to unsettle. And that, ironically, may be the problem. If we start calling these simulations “friends,” we may be lowering the bar for what friendship means.
But here’s where Bloom’s interview adds an important nuance. He warns against sensationalizing only the negatives. When we hear about someone forming an attachment to an AI, the headlines lean toward tragedy — stories of loneliness, even suicide. What gets less attention are the benefits: dementia patients who find dignity in conversation with an AI, or people with social anxiety who gain confidence through low-stakes interaction. These are real, tangible goods.
So perhaps the question is not whether AI companions are bad, but whether we mistake them for something they are not. Used as tools, they can ease suffering, provide comfort, and help people connect back to human communities. But when they are marketed — or embraced — as “friends,” we risk substituting the real thing with something frictionless and easy.
And friction matters. It is in the awkward pauses, the misunderstandings, the inconvenient demands of real friendships that we learn to listen, to stretch, to become more human ourselves. If AI companions smooth all that away, they may comfort us — but they won’t help us grow.
There’s one point on which I part ways with Bloom: he’s clear that AI isn’t conscious, while babies are. I’m not so sure. To me, “consciousness” feels less like a hard problem waiting to be solved and more like a convenient illusion we carry around. Maybe neither babies nor AIs are “conscious” in the way we imagine — they’re just different bundles of perception and response. But that’s a rant for another day.