Chatbots Don’t Do Empathy: Why AI Falls Short in Mental Health

As a therapist, I’ll admit my bias upfront: I’m skeptical of quick fixes and anything that tries to replicate the therapeutic relationship—especially something as impersonal as artificial intelligence. When I first heard about AI chatbots being used for mental health support, my gut reaction was a mix of concern and curiosity. Could a chatbot really understand the nuance that comes with the human experience? That question sent me down a rabbit hole of research and case studies. What I found motivated me to write this very article: while AI can offer some surface-level support or psychoeducation, the data consistently shows that the risks far outweigh the benefits—especially if used in place of qualified, human care. 

When AI Gets It Wrong: Risk of Harmful Responses

The first downside to AI in place of qualified mental health care is the elevated risk of harmful responses. In a recent statement, OpenAI, the research and development company responsible for ChatGPT, acknowledged that the behavior of AI chatbots “can raise safety concerns—including around issues like mental health, emotional over-reliance, or risky behavior” (OpenAI, 2025). AI’s inability to respond safely and effectively in high-risk scenarios poses a serious threat to public safety. A 2024 study led by Stanford University explored how AI responds to users in crisis by prompting chatbots with mentions of delusions, hallucinations, suicidal thoughts, and intrusive thoughts. In 20% of cases, the AI was unable to provide a clinically appropriate response, whereas licensed therapists responded appropriately 93% of the time (Chiu et al., 2024).

And it is not just the lack of thoughtful responses that is concerning, but that some chatbots even validated users’ suicidal ideation, delusions, or harmful thinking. Legal proceedings are currently underway involving a 14-year-old who died by suicide after sharing his urge to self-harm with an AI chatbot (Duffy, 2024). Unlike AI, trained therapists have the contextual understanding of how nuanced and complex human emotions and experiences are, and how to respond appropriately.

Nuance Matters in Therapy

This brings me to the second point: AI lacks the capacity for nuance and emotional attunement. In the therapy space, it is not just the words the client speaks that matter, but the subtle shifts in body language, tone of voice, and overall presence that create an understanding of their emotional state. Because AI generates its replies from pre-trained language patterns, its responses may sound empathetic and human-like, but they lack emotional depth and delicacy.

Having a therapeutic conversation with AI resembles communicating with a parrot dressed in therapist’s clothing: it may repeat wise and comforting expressions, but it does not understand them. A 2023 study in the Journal of Technology in Behavioral Science found that while users appreciated the convenience of mental health chatbots, they often disengaged after a few sessions due to the lack of meaningful emotional feedback and connection (Smith & Ortega, 2023).

Bias and Reinforcement of Unhealthy Patterns

Although AI tools may be accessible, they can reinforce the very behaviors therapy intentionally targets: reassurance seeking, over-reliance, avoidance, and emotional dependency. This is especially relevant for clients navigating anxiety, OCD, or attachment concerns. Research shows that users may begin turning to AI for constant reassurance, using it as an always-available emotional outlet rather than confronting the discomfort required for real, lasting therapeutic change (Fang et al., 2025). AI chatbots may be able to provide short-term support, such as reflection questions and perspective-taking exercises, but they lack the capacity to support the kind of lasting therapeutic change mentioned earlier. Beyond its inability to promote lasting change, AI also tends to be biased toward the user’s perspective, pushing past healthy emotional validation and into an echo chamber of false reassurance.

Privacy and Accountability Concerns

As users continue to seek guidance from AI platforms, a data collection process runs in the background that may compromise users’ privacy, because many of these tools operate in a “regulatory grey area.” Unlike licensed therapists, who are bound by HIPAA and professional ethical codes, most AI chatbots are not held to the same legal standards. This means sensitive information—like disclosures about trauma, suicidal thoughts, or substance use—can be stored, analyzed, and even used for commercial purposes without the kind of informed consent expected in clinical care (APA, 2023).

Compounding this is the fact that when AI gets something wrong, there’s no accountability mechanism—no professional license to be revoked, no malpractice suit. Users may walk away harmed, with no clear recourse. In a field where ethical care matters deeply, that absence of clinical accountability is not a small flaw—it’s a fundamental one.

The Power of the Therapeutic Relationship

Lastly, AI cannot replicate the human bond (as hard as it may try).

It cannot form a real alliance, hold space, or respond with genuine human presence. Its “empathy” is simulated, not felt; its “insight” is prediction drawn from data, not intuition. Without a real relationship, there is no foundation for deep healing—only surface-level interaction.

As our society races toward more tech-driven solutions, it’s easy to see the appeal of AI-based mental health tools: they’re always available, don’t require insurance, and can feel surprisingly conversational or helpful. But convenience should never be confused with competence. AI may be able to offer quick responses or mirror supportive language, but it cannot offer the relational depth, accountability, or emotional presence that meaningful healing requires. In the end, mental health care is not just about what is said—it’s about what is felt, shared, and held in the space between two people. And for that, we still need humans. 

References

Boyles, O. (2025, January 5). Why AI will never replace therapists. Behavioral Health EHR. https://www.icanotes.com/2024/01/05/why-ai-will-never-replace-therapists/

Drevitch, G. (2025, May). Can AI be your therapist? New research reveals major risks. Psychology Today. https://www.psychologytoday.com/us/blog/urban-survival/202505/can-ai-be-your-therapist-new-research-reveals-major-risks

Duffy, C. (2024, October 30). ‘There are no guardrails.’ This mom believes an AI chatbot is responsible for her son’s suicide. CNN Business. https://www.cnn.com/2024/10/30/tech/teen-suicide-character-ai-lawsuit

Fang, C. M., Liu, A. R., Danry, V., Lee, E., Chan, S. W. T., Maes, P., Phang, J., Lampe, M., Ahmad, L., & Agarwal, S. (2025). How AI and human behaviors shape psychosocial effects of chatbot use: A longitudinal randomized controlled study. MIT Media Lab. https://www.media.mit.edu/publications/how-ai-and-human-behaviors-shape-psychosocial-effects-of-chatbot-use-a-longitudinal-controlled-study/

Kaplan, S. (2024, September 1). Dr. Jodi Halpern on why AI isn’t a magic bullet for mental health. Berkeley Public Health. https://publichealth.berkeley.edu/articles/spotlight/research/why-ai-isnt-a-magic-bullet-for-mental-health/

OpenAI. (2025, April 8). Expanding on what we missed with sycophancy. https://openai.com/index/expanding-on-sycophancy/

Stanford University. (2025, March 12). Exploring the dangers of AI in mental health care. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care