As mental health issues among Singapore’s youth become more pronounced, many are seeking support from AI chatbots such as ChatGPT rather than traditional therapy. The shift comes after Singapore fell to 34th in global happiness rankings, overtaken by Taiwan. With professional care often priced out of reach, young people have embraced these platforms as a more affordable way to cope with rising mental health concerns.
For 25-year-old Jessica, who requested anonymity, ChatGPT has become a tool to manage her anxiety, heartbreak, and career struggles. She describes the chatbot as a space where she can “trauma dump” without the fear of being judged. “I know ChatGPT is an AI bot, but it makes me feel heard and validates my feelings more than any human does sometimes,” she said. As someone who was between jobs, Jessica found the chatbot’s rational responses grounding, often feeling more reassured by it than by late-night conversations with friends.
Jessica also emphasized the affordability of AI-based support. With therapy sessions in Singapore costing between S$80 and S$300, she viewed ChatGPT as a far cheaper alternative.
Similarly, 32-year-old Vanessa, who also asked for anonymity, used ChatGPT as an objective sounding board rather than a source of emotional support. “I didn’t think I was looking for emotional support specifically, but I needed a neutral space to organize my thoughts,” she explained. Vanessa used the chatbot alongside in-person therapy, for which, she acknowledged, there is “no substitute.”
While some, like Jessica, rely on AI for guidance akin to therapy, experts note that AI chatbots are not replacements for professional mental health care. John Lim, chief well-being officer at the Singapore Counselling Centre, pointed out that AI offers a “non-judgemental” platform for users to express themselves, often serving as a precursor to therapy or a supplement for ongoing treatment. “For many, using AI chatbots is a gentle introduction to what support might look like,” he said.
The widespread use of AI chatbots comes in response to rising mental health challenges. A 2024 survey by Singapore’s Institute of Mental Health revealed that nearly one in three young people aged 15 to 35 showed signs of depression, anxiety, or stress. Around 25% of those surveyed experienced severe anxiety symptoms in the week prior to the survey.
In an effort to address this crisis, Singapore introduced Wysa in 2020, a mental health chatbot designed to offer meditation exercises, breathing techniques, and motivational talks. Pranav Gupta, global head of commercial and partnerships at Wysa, reported that over 90% of users felt emotional relief and were better able to manage their thoughts. He noted that many sought help for work and relationship issues exacerbated by the pandemic.
Despite the benefits, experts caution that AI chatbots, while offering accessible support, fall short when it comes to providing deep emotional healing. Trauma recovery therapist Nur Adam from The Good Life Counselling explained that the use of AI chatbots reflects a larger “emotional and mental health crisis.” “People are struggling to access care, so they turn to what’s easily available,” she said. “In most cases, it allows them to express their feelings and receive feedback, but that doesn’t replace true therapeutic support.”
Lim also warned of the risks of over-reliance on AI chatbots, including inaccurate advice and a lack of accountability, which could lead to harmful suggestions or a failure to recognize urgent situations.
Gupta clarified that AI is not meant to replace therapy but to complement it. “AI will never replace human therapists. It serves as an essential complement—filling gaps, offering support between sessions, and reaching people who might not otherwise seek help,” he said. Looking ahead, he believes the future of mental healthcare will involve collaboration between AI and human therapists.
In response to the growing reliance on AI for mental health support, one netizen humorously summed up the situation: “Computer programmed to say ‘You’re fine.’ So you’re fine. According to the way the AI has been programmed… you’d better be fine, or else.”