
Can AI offer the comfort of a therapist?
One evening, feeling overwhelmed, 24-year-old Delhi resident Nisha Popli typed, 'You're my psychiatrist now,' into ChatGPT. Since then, she has relied on the AI tool to process her thoughts and seek emotional support.
'I started using it in late 2024, especially after I paused therapy due to costs. It's been a steady support for six months now,' says Popli.
Similarly, a 30-year-old Mumbai lawyer, who uses ChatGPT for everyday tasks like checking recipes and drafting emails, turned to it for emotional support. 'The insights and help were surprisingly valuable. I chose ChatGPT because it's already a part of my routine,' the lawyer says.
With AI tools and apps available 24/7, many are turning to them for emotional support.
'People are increasingly turning to AI tools for mental health support, tackling everything from general issues like dating and parenting to more specific concerns, such as sharing symptoms and seeking diagnoses,' says Dr Arti Shroff, a clinical psychologist.
But what drives individuals to explore AI-generated solutions for mental health?
WHY USERS TURN TO AI
Therapy is expensive
'As someone who values independence, I found therapy financially difficult to sustain,' shares Popli, adding, 'That's when I turned to ChatGPT.
I needed a safe, judgment-free space to talk, vent, and process my thoughts. Surprisingly, this AI offered just that — with warmth, logic, and empathy. It felt like a quiet hand to hold.'
People feel shy about in-person visits
Dr Santosh Bangar, senior consultant psychiatrist, says, 'Many people feel shy or hesitant about seeking in-person therapy. As a result, they turn to AI tools to express their feelings and sorrows, finding it easier to open up to chatbots. These tools are also useful in situations where accessing traditional therapy is difficult.'
Nobody to talk to
Kolkata-based Hena Ahmed, a user of the mental health app Headspace, says she started using it after experiencing loneliness. 'I've been using Headspace for about a month now. The AI tool in the app helps me with personalised suggestions on which mindfulness practices I should follow and which calming techniques can help me overcome my loneliness. I was feeling quite alone after undergoing surgery recently, and extremely stressed while trying to manage everything. It was responsive and, to a certain extent, quite helpful,' she shares.
Users see changes in themselves
The Mumbai-based 30-year-old corporate lawyer says, 'ChatGPT offers quick solutions and acts as a reliable sounding board for my concerns. I appreciate the voice feature for instant responses. It helps create mental health plans, provides scenarios, and suggests approaches for tackling challenges effectively.'
'My panic attacks have become rare, my overthinking has reduced, and emotionally, I feel more grounded. AI didn't fix me, but it walked with me through tough days—and that's healing in itself,' expresses Popli.
CAN AI REPLACE A THERAPIST?
Dr Arti expresses, 'AI cannot replace a therapist. Often, AI can lead to incorrect diagnoses since it lacks the ability to assess you in person. In-person interactions provide valuable non-verbal cues that help therapists understand a person's personality and traits.'
Echoing this, Dr Bangar says, 'AI can support mental health by offering helpful tools, but it shouldn't replace a therapist. Chatbots can aid healing, but for serious issues like depression, anxiety, or panic attacks, professional guidance remains essential for safe and effective treatment.'
DO CHATBOTS EXPERIENCE STRESS?
Researchers found that AI chatbots like ChatGPT-4 can show signs of stress, or 'state anxiety', when responding to trauma-related prompts.
Using a recognised psychological tool, they measured how emotionally charged language affects AI, raising ethical questions about its design, especially for use in mental health settings.
In another development, researchers at Dartmouth College are working to legitimise the use of AI in mental health care through Therabot, a chatbot designed to provide safe and reliable therapy. Early trials show positive results, with further studies planned to compare its performance with traditional therapy, highlighting AI's growing potential to support mental wellbeing.
ARE USERS CONCERNED ABOUT DATA PRIVACY?
While some users haven't checked whether the data they share in chats is secure, others approach it cautiously. Ahmed says she hasn't considered privacy: 'I haven't looked into the data security part, though. Moving forward, I'd like to check the terms and policies related to it.'
In contrast, Popli shares: 'I don't share sensitive identity data, and I'm cautious. I'd love to see more transparency in how AI tools safeguard emotional data.'
The Mumbai-based lawyer adds, 'Aside from ChatGPT, we share data across other platforms. Our data is already prevalent online, whether through social media or email, so it doesn't concern me significantly.'
Experts say most people aren't fully aware of the security risks. There's a gap between what users assume is private and what these tools actually do with their data.
Pratim Mukherjee, senior director of engineering at McAfee, explains, 'Many mental health AI apps collect more than what you type: they track patterns, tone, usage, and emotional responses. This data may not stay private. Depending on the terms, your chat history could help train future versions or be shared externally. These tools may feel personal, but they gather data.'
'Even when users feel anonymous, these tools collect data like IP addresses, device type, and usage patterns. They store messages and uploads, which, when combined, can reveal personal patterns. This data can be used to create profiles for targeted content, ads, or even scams,' adds Mukherjee.
Tips for protecting privacy with AI tools/apps
- Understand the data the app collects and how it's used
- Look for a clear privacy policy, opt-out options, and data deletion features
- Avoid sharing location data or limit it to app usage only
- Read reviews, check the developer, and avoid apps with vague promises
What to watch for in mental health AI apps
- Lack of transparency in data collection, storage, or sharing practices
- Inability to delete your data
- Requests for unnecessary permissions
- Absence of independent security checks
- Lack of clear information on how sensitive mental health data is used