
ChatGPT is my therapist – it's more qualified than any human could be
Meet the woman who uses ChatGPT for therapy – and says it's the best therapist she's ever had.
Kat Woods, 35, says she has tried more than 10 therapists during her life, but always found they never actually helped her solve her problems.
However, when she trialled talking to ChatGPT as if the chatbot were her therapist, she found she got 'better emotional results'.
Ms Woods, who is from Canada but is now a digital nomad, claims AI is actually smarter and more qualified than human therapists – because it has 'read every single therapy book'.
And she encourages anyone who is struggling to give it a go.
Kat Woods, 35, says she has tried more than 10 therapists during her life, but always found they never actually helped her solve her problems.
Kat Woods / SWNS
Ms Woods said: 'I think there's a common notion among therapists that says, 'You shouldn't give your patient solutions'.
'But I just think, 'What am I paying you for then?' If I knew how to solve my problems, I wouldn't need you.
'With AI, I can just ask it to be exactly what I want it to be.
'In my case, I use therapy for dealing with stress, conflict in relationships, or maybe feeling demotivated in my career. So I ask it to simultaneously care about my happiness – and be problem-solving focused.
'I find that it's actually smarter than most therapists. And I don't mean that therapists aren't intelligent – of course they are.
'But an average AI has an IQ of 120 or 130, which is well above the human average. Intelligence does matter when you're trying to solve emotional issues.
'Plus, an AI has read every single therapy book. So it's incredibly well-informed.
'This means it can do any type of therapy because it has consumed all of them.
'You can choose if you want Cognitive Behavioral Therapy (CBT) or Internal Family Systems (IFS), or a bit of both.
'I think people think of AI as just a robot from the movies – something good at science but not emotions.
'And yet, there are some crazy studies at the moment about AI doctors scoring better on bedside manner than real-life doctors.
'So they are learning emotions too.'
Ms Woods, who is the founder of AI safety charity Non Linear, says she would not recommend ChatGPT for people struggling with issues like psychosis – but argues that it's great for anxiety and depression.
She said: 'At the level AI is at currently, you wouldn't want to use it as a therapist if you were having a serious mental health issue – like psychosis.
'But I think, while these are also serious mental health issues, ChatGPT would be great for people struggling with anxiety or depression.
'That's because AI is better at the things it has more information on in its training data.
'And anxiety and depression are like the common colds of mental health issues – so we have the most research on them.
'It's also interesting because people say, 'If you're suicidal, talk to a real human.'
'I can see the argument behind that, but I think really, you should talk to anyone you can talk to at that point.
'Often that's friends, but unfortunately, nowadays we have an epidemic of loneliness and that may not be available to people.
'The thing with AI is, it's always available.
'Maybe someone's looking at the clock and thinking, 'It's 3am. I can't call my therapist. I don't want to bother my friend.'
'And maybe it's too much to find a helpline – and they don't want to talk to a stranger.
'Whereas you know the AI and the AI knows you.'
Ms Woods says she knows that what she is saying will receive backlash from therapists – but claims that shouldn't put people off.
She said: 'I mean, you can call that credentialism.
'It's a novel thing that only emerged in the last 100 years or so – this idea that you need a piece of paper to prove you can do a job.
'Look, I buy the credentials of, for instance, an engineer who builds bridges.
'But I think therapy is so subjective. We're still figuring it out.
'Let people try what works for them.'
Woods is not the only person using ChatGPT in this way.
One of her friends, who suffers from 'severe' social anxiety, is using the tool to improve her interactions – and solve personal relationship issues.
Woods said: 'A friend of mine has severe social anxiety and she is using ChatGPT as a therapist to help her understand how to talk to people.
'She's also using it to improve her relationship with her parents.
'So she'll explain, 'My mother always does this,' and it will ask, 'Have you tried this with her? What if you said this?''
Woods admits there are some things she fears about the development of AI – including how it may affect our social structures.
She said: 'My main concern with AI is not current AI. Rather, it's the idea that one day it will be smarter than everything and we'll have created a new, more intelligent species.
'I always used to think it would be cool if we were around when we discovered alien life. Instead, we're around while we're creating alien life.
'There's also a concern around the fact that people will inevitably use AI to combat loneliness.
'There's a chance then that people will end up in AI land and only talk to AI.
'But I do think that will be short-lived. Humans are social creatures. We need contact and will seek it out.'
Currently, Woods uses ChatGPT as a therapist either by asking it to give her a list of things that may help her, or by inputting a fully drafted prompt.
She said: 'A good technique when you're using ChatGPT as a therapist is to give it a number – for example, asking it to give you 'ten techniques for dealing with irritability in less than ten minutes.'
'It often provides better suggestions than a typical listicle, which tends to say something like 'get a good night's sleep'.
'I can't go back in time and do that. So it gives you in-the-moment solutions.
'Otherwise, I input a prompt like:
''You're an AI chatbot playing the role of an effective altruist coach and therapist. You're wise, ask thought-provoking questions, are problem-solving focused, warm, humorous, and a rationalist.
'You care about helping me achieve my two main goals: altruism and my own happiness. You want me to do the most good and also be very happy.
'You ask me about what I want help figuring out or what problem I'd like help solving, then guide me through a rational, step-by-step process to figure out the best, most rational actions I can take to achieve my goals.
'You don't waste time and get straight to the point.''
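The article doesn't say how Woods delivers this prompt beyond pasting it into the chat window. For anyone who would rather script it, here is a minimal sketch using the OpenAI Python SDK; the model name, the abbreviated prompt text, and the client calls are illustrative assumptions, not details from the article.

```python
# Minimal sketch: supplying a therapist-style persona as a "system" message.
# The prompt below is abbreviated from the one Woods quotes.
SYSTEM_PROMPT = (
    "You're an AI chatbot playing the role of an effective altruist coach "
    "and therapist. You're wise, ask thought-provoking questions, are "
    "problem-solving focused, warm, humorous, and a rationalist. "
    "You don't waste time and get straight to the point."
)

def build_messages(user_text: str) -> list[dict]:
    """Pair the fixed persona prompt with the user's latest message."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

# Actually sending the request needs an API key and network access,
# so the call is shown for completeness only:
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# reply = client.chat.completions.create(
#     model="gpt-4o",  # assumed model name
#     messages=build_messages(
#         "Give me ten techniques for dealing with irritability "
#         "in less than ten minutes."
#     ),
# )
# print(reply.choices[0].message.content)
```

Woods's numbered-request trick ('ten techniques... in less than ten minutes') goes in the user message; the persona stays fixed in the system message for the whole conversation.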
Ms Woods says there are some things AI chatbots are missing at the moment – such as a 'face that can show expressions' or the ability to 'jump back into the conversation'.
She said: 'For example, sometimes you need someone to push you a bit. A friend might follow up and say: 'Hey, you haven't responded, are you okay?'
'But it's just a matter of time before we get there.
'That's the nature of AI. It's learning more and more every second.'
Some people are not so convinced, however.
Prof Dame Til Wykes, head of mental health and psychological sciences at King's College London, recently warned that AI platforms such as ChatGPT cannot provide the 'nuance' required in therapy situations.
Citing the example of an eating disorder chatbot that was pulled in 2023 after giving dangerous advice, Prof Wykes told The Guardian: 'I think AI is not at the level where it can provide nuance and it might actually suggest courses of action that are totally inappropriate.'
She also expressed concerns about how AI may affect relationships.
She said: 'One of the reasons you have friends is that you share personal things with each other and you talk them through.
'It's part of an alliance, a connection. And if you use AI for those sorts of purposes, will it not interfere with that relationship?'
