'Won't get annoyed, won't snap': Indonesians tap AI for judgement-free emotional support, but risks abound

CNA, 3 days ago

JAKARTA: Ahead of an extended family gathering, Nirmala (not her real name) found herself unusually anxious.
The reason: Small talk that could spiral into interrogation.
"Sometimes I just don't know how to answer questions from relatives, and that stresses me out," said Nirmala, 39, who asked to remain anonymous.
In contrast, the generative artificial intelligence platform ChatGPT has been nothing but a source of comfort ever since Nirmala began using it as a sounding board last October.
"It's not that I don't have anyone to talk to," Nirmala told CNA Indonesia. "But when I bring up things that people think are trivial, I'm often told I'm being dramatic. So I talk to AI instead – at least it listens without throwing judgement."
Like Nirmala, overseas student Ila (not her real name) has turned to AI-driven chatbots for advice.
Ila, 35, first turned to ChatGPT in April 2023 when she was preparing to move abroad for further studies. She later also began using the Chinese AI platform DeepSeek.
At first, Ila – who also requested anonymity – used the platforms for practical information about university life and daily routines in her host country, which she declined to reveal.
"Before leaving for school, I had a ton of questions about life abroad, especially since I had to bring my children with me. AI became one of the ways I could gain perspective, aside from talking directly with people who'd already been through it," she said.
The platforms' replies put her at such ease that in October last year, she began sharing her personal issues with the chatbots.
NO JUDGEMENT FROM CHATBOTS
AI chatbots have taken the world by storm in recent years, and more people are turning to them for support with mental health issues.
Indonesia is no different. An online survey in April by branding and data firm Snapcart found that 6 per cent of 3,611 respondents there are using AI "as a friend to talk to and share feelings with". Nearly six in 10 (58 per cent) of respondents who gave this answer said they would sometimes consider AI as a replacement for psychologists.
People in Southeast Asia's largest economy are not necessarily turning to AI chatbots because they lack human friends, but because AI is available 24/7 and "listens" without judgement, users and observers told CNA Indonesia.
The technology, they said, is especially handy in a country with a relatively low number of psychologists.
According to the Indonesian Clinical Psychologists Association, the country has 4,004 certified clinical psychologists, of whom 3,084 are actively practising.
With a population of about 280 million people, this translates to about 1.43 certified clinical psychologists per 100,000 population.
In comparison, neighbouring Singapore has 9.7 psychologists per 100,000 population – a ratio that is itself lower than in many Organisation for Economic Co-operation and Development (OECD) nations.
The potential benefits of using AI in mental health are clear, experts said, even as risks and the need for regulation exist.
The rise of AI as a trusted outlet for emotional expression is closely tied to people's increasingly digital lives, said clinical psychologist Catarina Asthi Dwi Jayanti from Santosha Mental Health Centre in Bandung.
AI conversations can feel more intuitive for those who grew up with texting and screens, she said, adding that at least a dozen clients have told her they have consulted AI.
"For some people, writing is a way to organise their thoughts. AI provides that space, without the fear of being judged," she said.
Conversing with ChatGPT is a safe way of rehearsing her thoughts before opening up to somebody close to her, Nirmala said. "Honestly it doesn't feel like I'm talking to a machine. It feels like a conversation with someone who gets me," she said.
AI chatbots offer accessibility, anonymity, and speed, said telecommunications expert Heru Sutadi, executive director of the Indonesia ICT Institute.
AI platforms, he said, are "programmed to be neutral and non-critical".
"That's why users often feel more accepted, even if the responses aren't always deeply insightful," he said.
Unlike a session with a psychologist, "you can access AI 24/7, often at little to no cost", Heru said. "Users can share as much as they want without the pressure of social expectations. And best of all, AI replies instantly."
In Indonesia, an in-person session with a private psychologist can cost upwards of 350,000 rupiah (US$21.50).
Popular telemedicine platform Halodoc offers psychiatrist consultations from 70,000 rupiah, while mental health app Riliv offers online sessions with a psychologist from 50,000 rupiah.
Another advantage of a chatbot, said Ila, is that it "won't get annoyed, won't snap, won't have feelings about me bombarding it with a dozen questions".
"That's not the case when you're talking to a real person," she added.
As such, AI can serve as a "first safe zone" before someone seeks professional help, especially when dealing with topics such as sexuality, religion, trauma or family conflict, said Catarina.
"The anonymity of the internet, and the comfort that comes with it, allows young people to open up without the fear of shame or social stigma," she explained.
Some of her clients, she added, turned to AI because they "felt free to share without worrying what others, including psychologists, might think of them, especially if they feared being labelled as strange or overly emotional."
RISKS AND IMPACT ON REAL-LIFE RELATIONSHIPS
But mental health professionals are just as wary of the risks posed by AI chatbots, citing concerns over privacy, gaps in regulation and the technology's impact on users' real-life interactions with others.
The machines can offer a false sense of comfort, Heru said. "The perceived empathy and safety can be misleading. Users might think AI is capable of human warmth when, in reality, it's just an algorithm mimicking patterns."
Another major concern is data privacy, Heru said. Conversations with AI are stored on company servers and if cyber breaches occur, "sensitive data could be leaked, misused for targeted advertising, profiling, or even sold to third parties".
For its part, OpenAI, the company behind ChatGPT, has said: "We do not actively collect personal information to train our models, do not use public internet data to profile individuals, target advertising, or sell user data."
Indonesia released a National Strategy for Artificial Intelligence in 2020, but the document is non-binding. AI is currently governed loosely under the 2008 Electronic Information and Transactions (ITE) Law and the 2022 Personal Data Protection Law, both of which touch on AI but lack specificity.
A Code of Ethics for AI was issued by the Ministry of Communication and Digital Affairs in 2023, but its guidelines remain vague.
In January this year, Communication and Digital Affairs Minister Meutya Hafid announced comprehensive AI regulations would be rolled out.
Studies are also emerging on the impact of chatbot usage on users' real-life social interactions.
In a 2024 study involving 496 users of the chatbot Replika, researchers from China found that greater use of AI chatbots, and satisfaction with them, could negatively affect a person's real-life interpersonal skills and relationships.
Child and adolescent clinical psychologist Lydia Agnes Gultom from Klinik Utama dr. Indrajana, a healthcare clinic in Jakarta, said AI-based relationships are inherently one-sided. Such interactions could hinder people's abilities to empathise, resolve conflicts, assert themselves, negotiate or collaborate, she said.
"In the long run, this reduces exposure to genuine social interaction," said Agnes.
In other countries, experts have highlighted the need for guardrails on the use of AI chatbots for mental health.
As these platforms tend to align with and reinforce users' views, they may fail to challenge dangerous beliefs and could potentially drive vulnerable individuals to self-harm, the American Psychological Association told US regulators earlier this year.
Safety features introduced by some companies, such as disclaimers that the chatbots are not "real people", are also inadequate, the experts said.
Still, AI can complement the work of mental health professionals, experts told CNA Indonesia.
It can offer initial emotional support and a space for humans to share and explore their feelings with the right prompts, said Catarina of Santosha Mental Health Centre.
But when it comes to diagnosis and grasping the complexity of human emotions, AI still falls short, she said. "It lacks interview (skills), observation and a battery of assessment tools."
AI cannot provide proper intervention in emergency situations such as suicidal ideation, panic attacks or abuse, said Agnes.
Therapeutic relationships rooted in trust, empathy, and nonverbal communication can only happen between humans, she added.
