AI and Mental Health Support: A Conflict of Hope and Hesitation
I’ve been feeling conflicted lately about the role of AI in mental health support. On the surface, it seems like an incredible innovation — a way to increase access to care in places where traditional mental health resources are limited or nonexistent. AI tools are relatively low-cost, available around the clock, and able to support millions of people at once. That kind of reach just isn’t possible with human providers alone.
One example that stands out is the AI chatbot company Elomia, which received a grant to have its chatbot translated into Ukrainian to support people impacted by the war. In a region where trauma is widespread and access to providers is severely limited, something like this could make a real difference. It’s hard not to feel hopeful about the possibilities when you think about communities where people are suffering and don’t have anyone to turn to. Could AI help bridge that gap?
And yet, there’s another side to this: a quieter concern that’s hard to shake.
It’s not wrong to say that many of society’s struggles today — loneliness, disconnection, even some forms of anxiety and depression — may stem from a growing distance between people. We text instead of talk. We scroll instead of connect. And now, even therapy — one of the most deeply human, intimate forms of support — might be handed over to a machine. Something about that doesn’t sit right with me.
Therapy is built on trust, presence, and vulnerability. A therapist listens not just to words but to silences. They notice the moment someone looks down, hesitates, or fidgets with their hands. They offer warmth, attunement, and a feeling of being truly seen — things that, at least right now, no chatbot can authentically replicate.
But maybe this isn’t an either/or question.
That’s why I find myself increasingly optimistic about the idea of therapy augmented by AI, a hybrid approach that combines the empathy and intuition of a human therapist with the powerful, data-driven tools of artificial intelligence. This feels like the best of both worlds: expanding access without replacing the human element. And the truth is, it’s already happening. People are turning to AI systems for support — and not just for surface-level concerns. They’re talking about trauma, depression, anxiety, and fears they may have never shared with anyone else. Sometimes, the anonymity and lack of perceived judgment make it easier for them to open up.
But that raises another set of questions. Does the chatbot give them the support they truly need — or does it become a crutch that helps them avoid real human connection? Do we run the risk of replacing long-term healing with short-term relief?
These are the kinds of questions we need to ask now — not after these tools are fully embedded in the way we approach mental health.
AI in mental health is not inherently good or bad. Like any tool, its value lies in how we use it. If we can approach this thoughtfully, ethically, and with a deep respect for the complexities of the human experience, then maybe — just maybe — AI can help us build a future that’s not less human, but more so.