By Kashish Mehandiratta
In the United States alone, nearly 60 million adults – 23.08% of the population – experienced a mental illness in the past year. In addition, almost 13 million adults (5.04%) reported serious thoughts of suicide, signaling a significant and widespread mental health crisis, according to a Mental Health America report. Unfortunately, several barriers stand in the way of people receiving the care they need, and the report found that over half of adults with a mental health condition did not receive treatment.
Faced with high costs, long waitlists, and stigma, many people have turned to Artificial Intelligence for mental health support.
Why AI?
Many people have started using AI-based chatbots, such as ChatGPT, Woebot, Replika, and Wysa, for mental health support. There are several reasons for this, according to a KQED report. For one, these platforms are affordable (free or low-cost). They’re also available 24/7 without appointments, and they offer a space where people can feel heard, without the fear of judgment, labels, or a clinical diagnosis. Additionally, during challenging times, a chatbot might seem to provide instant relief or support.
But what kind of ‘therapy’ can AI chatbots provide?
Therapy chatbots, often known as AI therapists, use artificial intelligence to support mental health through therapeutic exercises and automated dialogues. Tools such as ChatGPT can remember past messages, respond sympathetically, and offer general support without judgment. According to a systematic review published on ScienceDirect, consistent use can help reduce moderate anxiety and depression symptoms, enhance emotional regulation, and temporarily boost people’s moods, particularly over short periods, such as 4 to 8 weeks.
While these tools are not a replacement for real therapists, they can be a useful starting step for people seeking help. They are available at any time, provide immediate responses, and allow people to feel heard without stigma or cost constraints, per the ScienceDirect review. However, they are unable to diagnose mental illness, evaluate suicide risk, or modify treatment in light of complex human circumstances. They also come with risks, including concerns about privacy, some of which were outlined in an article from The Conversation.
Dangers of AI Therapists
While AI therapy chatbots can offer accessibility and immediate support, several researchers have identified significant flaws in AI therapy. A study from Stanford exposed alarming risks through live experiments, finding that many AI chatbots, like those on Character.ai or offered by platforms such as 7 Cups, often produce inappropriate, stigmatizing, or even dangerous responses when faced with mental health scenarios, including suicidal ideation. These systems, powered by algorithms designed to prioritize user satisfaction rather than clinical training, tend to validate users without question, even fostering delusions.

Another Stanford study found that both older and newer AI bots showed pervasive prejudice and stigma toward disorders like alcoholism and schizophrenia. Because they cannot comprehend context, nuance, or nonverbal cues, these chatbots fail to meet fundamental therapeutic principles like empathy, accountability, and harm reduction. “It’s not clear AI bots could even meet the standard of a bad therapist,” said Stanford researcher Jared Moore. Although AI may appear empathetic on the surface, its lack of human judgment, ethical protections, and contextual awareness makes it potentially dangerous, particularly for the most vulnerable users.
Dr. Kashish Mehandiratta is a public health professional and trained dentist with expertise in clinical research, data analytics, and volunteer-driven community health initiatives. She is committed to advancing healthcare and equity through evidence-based, community-focused solutions.

