The Cautious Optimist: AI, Mental Health, and the Question of Reimbursement
If you’ve been following along, you may have noticed that some of my recent posts have had a more critical tone. That’s not accidental. It reflects a shift—not away from enthusiasm for innovation, but toward a deeper, more grounded kind of engagement with it. I’ve spent most of my career focused on mental health, human resilience, and the systems that support them. As AI becomes more integrated into the field, I find myself both inspired and concerned.
So let me be clear: I am still a cautious optimist. I believe AI has tremendous potential to improve mental health care—especially in terms of access, personalization, and scalability. But I also believe that optimism, if it’s to be useful, has to come with a healthy dose of critical reflection.
One of the biggest questions I’m wrestling with right now is about insurance reimbursement and how it will shape the future of care.
Will AI Therapists Be Reimbursed Differently Than Human Therapists?
As AI models continue to improve, we will almost certainly see data showing that AI-supported mental health interventions are effective. Perhaps even as effective as interventions delivered by a human therapist—at least when measured by certain outcomes like symptom reduction, goal attainment, or user satisfaction.
And yet, I find myself coming back to that old saying:
There are lies, damned lies, and statistics.
I don’t mean that cynically. I value data. I rely on it. But I also know that not all outcomes are easy to measure—and not all metrics tell the whole story. Mental health treatment isn’t just about reducing symptoms. It’s also about building trust, developing insight, and feeling genuinely seen and heard by another person.
There’s something deeply human—and profoundly healing—about sitting across from someone who listens, reflects, and connects. This experience doesn’t always show up in the data, but we know it matters. Anyone who has worked in this field, or sought therapy themselves, understands that the therapeutic relationship is often the most important agent of change.
So here’s my concern: What happens when insurance companies start using data to justify lower reimbursement rates for AI-based services? On paper, it might make sense. If an AI app reduces symptoms of depression at the same rate as a licensed therapist, why should the insurer pay more for the human?
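To make that logic concrete, here is a minimal sketch, in Python, of the kind of cost-per-outcome comparison a payer's analyst might run. Every figure and name in it is invented for illustration; none of this reflects real claims, pricing, or outcomes data.

```python
# Hypothetical cost-per-outcome comparison of the kind a payer might run
# when setting reimbursement rates. Every number below is invented for
# illustration; none reflects real claims, pricing, or outcomes data.

def cost_per_outcome_point(episode_cost: float, symptom_reduction: float) -> float:
    """Dollars spent per point of average symptom reduction (e.g., on the PHQ-9)."""
    return episode_cost / symptom_reduction

# Assumption: both modalities produce the same average improvement.
human = cost_per_outcome_point(episode_cost=1500.0, symptom_reduction=6.0)  # human therapist
ai = cost_per_outcome_point(episode_cost=200.0, symptom_reduction=6.0)      # AI program

print(f"Human therapy: ${human:.2f} per point of improvement")
print(f"AI program:    ${ai:.2f} per point of improvement")
# On this metric alone the AI option looks far "cheaper per unit of benefit,"
# which is precisely the problem: the ratio captures none of the relational
# factors discussed above.
```

The point of the sketch isn't the numbers. It's that a single ratio like this can make the decision look obvious while leaving everything that matters out of the denominator.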
But this line of thinking could have dangerous consequences—especially for the people who are most vulnerable.
The Risk of Two-Tiered Care
Let’s imagine a future where insurance companies begin reimbursing AI-supported mental health care at a lower rate, or where AI services become the default option for basic plans. In that world, only those with the financial means to pay out of pocket, or who have premium insurance coverage, would consistently have access to human therapists.
We would have created a two-tiered system, where some people get human connection, and others get an app.
To be clear: AI tools can absolutely be helpful. I’ve been experimenting with them myself and have seen how they can extend access, reduce waitlists, and provide immediate support in moments of need. In many cases, some care is better than no care. And for certain tasks—like mood tracking, cognitive restructuring prompts, or even guided meditations—AI may actually outperform humans in terms of efficiency and consistency.
But when AI becomes not just an option but the only option for some groups of people, that’s no longer innovation. That’s inequity.
The Intangibles Still Matter
AI is here to stay. And it should be. But as we move forward, we need to hold on tightly to what makes mental health care truly healing: the relationship, the context, the story behind the symptoms. These aren’t easily codified, and they won’t always show up in outcome dashboards or cost-benefit analyses.
The decisions that insurance companies make in the next few years will shape the landscape of care for decades. If we let data alone drive those decisions, we risk reducing mental health care to a transaction—something to be optimized, automated, and scaled.
Instead, we need a more nuanced approach. One that acknowledges the value of AI without minimizing the irreplaceable power of human connection. Reimbursement policies should reflect the full picture—not just what’s measurable, but what’s meaningful.
A Path Forward
I remain hopeful that we can find a balance. That we can design systems where AI enhances human care rather than replacing it. That we can build ethical frameworks that ensure equity, choice, and dignity in mental health treatment. But that won’t happen by default. It will require advocacy, oversight, and a willingness to say: “Just because we can doesn’t mean we should.”
So yes—I’m still an optimist. But I’m watching closely. And I hope you are, too.