Latest news with #chatbots


Washington Post
4 hours ago
6 tips to avoid using AI chatbots all wrong
Chatbots like ChatGPT, Google Gemini and Claude can be great for brainstorming or for help with difficult writing like obituaries. But they're also a minefield of potential goofs and embarrassments. Just look at the publicly posted feed of conversations with the Meta AI chatbot. Some chatters seemed unaware that they were posting their cringe-inducing dating questions, advice for tax evasion and a request for AI help finding a misplaced phone cord online. Please don't do that, or make any of these other AI mistakes:

A special warning about the Meta AI chatbot app: There's a 'Share' button at the top right corner of your chat. If you hit that option and then 'Post,' your chat may be funneled to a Facebook-like public feed called Discover with a stream of everyone's AI conversations. Some people appear to be accidentally using Meta AI like a public personal diary, or butt dialing with the app. Be careful. (This week, Meta added a warning if you're about to post your AI chat online, though it didn't appear consistently in the app.) If you're intentionally posting your AI chats publicly, ask yourself why, and whether you'd post the same thing on your Facebook page. It's also not clear why Meta thought it was a good idea to create a stream of everyone's chatbot musings.

Chatbots are designed to sound human and hold conversations that flow like a text gab fest with an old friend. Some 'companion' chatbots can role play as a romantic partner, including sexual conversations. But never forget that a chatbot is not your friend, your lover or a substitute for human relationships. If you're lonely or uncertain in social situations, it's okay to banter or practice with AI. Just be sure to take those skills into the real world. You can also try asking a chatbot to recommend local meetups or organizations for people in your age group or life stage, or to offer advice on making personal connections.

AI is so good at mimicking human chatter that scammers use it to strike up conversations and trick people into sending money. For safety, assume that anyone you meet only online is not who they say they are, particularly in romantic conversations or investment pitches. If you're falling for someone you've never met, stop and ask a family member or friend if anything seems off.

Chatbots make things up constantly. They're also designed to be friendly and agreeable so you'll spend more time using them. The combination sometimes results in obsequious nonsense, like when our Washington Post colleague found that OpenAI's ChatGPT invented passages from her own published columns and then fabricated explanations for why that was happening. (The Post has a content partnership with OpenAI.) When these oddities happen to you, it helps to know the reason: These are stupid computer errors. AI companies could program their systems to respond, 'This chatbot can't access that information,' when you ask questions about essays, books or news articles they can't actually read. Instead, the machines act like a kid who has to give a book report but hasn't read the book: They fabricate details and then lie when you catch them making stuff up. An OpenAI spokesperson said the company is 'continuously working to improve the accuracy and reliability of our models,' and referred to an online disclosure about ChatGPT's errors.

If you use a chatbot to help you write a flirty message to a dating app connection, a wedding toast or a cover letter for a job, people can tell when your words come verbatim from AI. (Or they can paste your text into an AI detector, although those technologies are flawed.) Roman Khaves, CEO of AI dating assistant Rizz, suggested treating chatbot text as a banal first draft. Rewrite it to sound like you, including specific details or personal references.

Most chatbots will use at least some information from your conversations to 'train' their AI, or they might save your information in ways you're not expecting. Niloofar Mireshghallah, an AI specialist and incoming Carnegie Mellon University professor, was surprised that tapping the thumbs-up or thumbs-down option to rate a reply from Anthropic's Claude starts a process of consenting to the company saving your entire conversation for up to 10 years. Anthropic said it's transparent about this process in the feedback box and in online questions and answers. Before confiding in chatbots, imagine how you'd feel if the information you're typing were subpoenaed or leaked publicly. Mireshghallah said she's unnerved by the prospect of people who work for chatbot companies reviewing your conversations, which she said happens sometimes. At a minimum, she advised not entering personally identifiable or sensitive information, like Social Security or passport numbers, into chatbots. (Use fake numbers if you need to.)


BBC News
a day ago
The best-case scenario for AI in schools
I'll be honest: I find the ways in which AI is changing our world to be a bit scary. It's getting harder to tell what's real and what's fake. It's unclear what jobs will exist in a few years. But more than anything, I worry about our kids – and whether a full-on embrace of AI could harm their ability to read deeply, write clearly and think critically.

A lot of parents I know are talking about AI and education. They see the same headlines that I do: some students are using it to cheat, some teachers are using it to increase efficiency and some school districts are fully embracing it, even though we don't have much reliable data on whether chatbots help or harm students' learning.

So, amid my worries about AI in the classroom, I called up Sal Khan, author of Brave New Words. Many of you may know him as the founder of Khan Academy, an educational nonprofit that's grown into an empire of online videos and tools which many students (my own children included) use when they're struggling to understand a topic in class. I wanted to talk to him because he's one of the most prominent voices making an optimistic case for how AI could improve our classrooms in a meaningful way. He's not blind to the fears that many parents have, but hearing him make a positive case for this technology was eye-opening. I really enjoyed our conversation – if you have a moment, you should watch (or read) some more of it below.

Below is an excerpt from our conversation, which has been edited for length and clarity.

Katty Kay: For parents, there is quite a lot of fear that their kids' brains are going to get outsourced and that everything will be done by ChatGPT. Paint the picture of how you envision AI as an enabler in education. What are classrooms going to look like in 10 years' time?

Sal Khan: First of all, those fears are real. They're legitimate fears. What I always like to do before I go into what's going to happen with the technology is to think about what some of the goals of writing and reading were in the first place. I think if you talk to an English teacher, they'll say it's important to be able to communicate and structure your thoughts. When you break it out like that, you can start to think of ways to not only address some of the fears about AI, but maybe even do things better than you did before.

The example I'll give is one where I actually won't talk about technology at all. Imagine if your child's school district just discovered $1bn (£743m) and decided to hire some amazing graduate students to hang out in the classroom. These graduate students are going to be on call for your teacher to help grade papers, bounce around ideas and think of really creative lesson plans. When class starts, those grad students, along with the teacher, are going to be able to walk around and help your children when they need it. The children don't have to wait for that help. And then, the grad students will report back to the teacher and say, 'Hey, I noticed Katty is not as engaged as she was yesterday' or 'Sal's really engaged today. Did you know that he's really into baseball? Let's make the next example about that just for Sal.' And then, they're able to distill all of that and communicate it to the parents. It's not once a term. It's almost real time.

I think that would be everyone's dream. The students would love it, the teachers would love it and the parents would love it. And that's essentially what's going to happen with AI. Obviously, it's not going to be human teaching assistants; it's going to be artificial intelligences assisting the teachers, able to observe the classroom and intervene while keeping the teacher in the loop.

KK: We are talking about a world where AI takes over the roles of doctors and other jobs. So, why would a teacher's role in a classroom even be something that we would seek to retain in a world where AI can do almost everything better?

SK: I think we're in a world where we're going to be able to raise the floor and create a much better high-scale, low-cost, automated safety net for the world. Take your doctor example. If you're in a rural village in India, you'll hopefully get an AI doctor that maybe can even help prescribe medicines and things like that. It won't be as good as the doctors you or I might be able to go to, but it'll be a lot better than what they had before.

Similarly, your children might be able to get access to an AI tutor or AI assessments. The reason I don't think that is the be-all and end-all is the same reason a lot of parents, including myself, feel the need to send their kids to a physical school with other kids and a social environment. We often focus a lot on just the standards of what happens in school: Can kids factor a polynomial? Can they grammatically correct a sentence? Those skills matter. But to some degree, the more important skills are: Can you deal with conflict? Can you be held accountable? Can you communicate? Do you know how to navigate social pressures? I think teachers, as the human beings in the room, are going to be super-important – to hold students accountable, but also to unlock that person-to-person connection.

KK: Is it possible that because the tools will be so much better, we will unlock in all students that kind of joy of learning that most of us don't really feel when we're in middle school?

SK: I think we'll do much, much better than we have in the past. I think the reason most students disengage is that things are going over their heads or the material isn't really connecting to their experiences in life. AI will give us a much better chance of personalising learning for those students. When you interact with content, you're much more likely to learn and remember it. We have activities on our AI tool, Khanmigo, where you can talk to AI simulations of historical figures or literary characters. That literally brings history to life in ways that we couldn't have imagined before.

To your question about five or 10 years in the future: this sounds very Star Trek-y, but virtual-reality glasses are probably going to become mainstream in about 10 years. It literally would be like a magic school bus ride where the teacher is going to be able to take the class into the circulatory system, or we're going to be able to go to ancient Rome together. I think that will be a much, much richer way to learn.

KK: So, is it that AI could enhance our ability not just to learn for learning's sake, but that it could also make us more creative? Is that how you see this?

SK: I think it will amplify whatever your intent already is. There are people who are just trying to do things as quickly as possible and cut corners. They will find ways to do that with AI. Now, those people usually aren't the highest performers, and when you amplify that with AI, they still won't be the highest performers. But for those who are looking to do something novel and creative, I think it will amplify that as well.

I have a commencement address that I have to give, and I am using AI – not to write the address, but I dictated all my thoughts onto my phone and AI transcribed them. Then, I started tweaking the draft. I went paragraph by paragraph and asked, 'Is there another way of saying this?' I'm not using 99% of what the AI might suggest, but just having that partner there is very powerful. I'm also bouncing ideas off of my 16-year-old son and my wife. They're not always around!

Imagine you're someone who gives great speeches, like Barack Obama. As president, he had an army of speechwriters. But I believe that he also came to the table with his own point of view. So, he was able to prompt those speechwriters so a speech would be in his voice, but also edit it himself so that it would truly be authentic to him and his ideas. I think these technologies now give us all the power that President Obama had. But if you don't write well, if you don't communicate well, it's going to have diminishing returns.


Al Jazeera
2 days ago
- Health
Can ChatGPT be your therapist?
AI chatbots can reduce anxiety and depression, according to recent research. As chatbot therapy goes mainstream, can it replace a real therapeutic relationship?


CNA
3 days ago
'Won't get annoyed, won't snap': Indonesians tap AI for judgement-free emotional support, but risks abound
JAKARTA: Ahead of an extended family gathering, Nirmala (not her real name) found herself unusually anxious. The reason: small talk that could spiral into interrogation.

'Sometimes I just don't know how to answer questions from relatives, and that stresses me out,' said Nirmala, 39, who asked to remain anonymous.

In contrast, the generative artificial intelligence platform ChatGPT has been nothing but a source of comfort ever since Nirmala began using it as a sounding board last October. 'It's not that I don't have anyone to talk to,' Nirmala told CNA Indonesia. 'But when I bring up things that people think are trivial, I'm often told I'm being dramatic. So I talk to AI instead – at least it listens without throwing judgement.'

Like Nirmala, overseas student Ila (not her real name) has turned to AI-driven chatbots for advice. Ila, 35, first turned to ChatGPT in April 2023 when she was preparing to move abroad for further studies. She later began using the Chinese AI platform DeepSeek as well.

At first, Ila – who also requested anonymity – used the platforms for practical information about university life and daily routines in her host country, which she declined to reveal. 'Before leaving for school, I had a ton of questions about life abroad, especially since I had to bring my children with me. AI became one of the ways I could gain perspective, aside from talking directly with people who'd already been through it,' she said.

The platforms' replies put her at such ease that in October last year, she began sharing her personal issues with the chatbots.

NO JUDGEMENT FROM CHATBOTS

AI chatbots have taken the world by storm in recent years, and more people are turning to them for mental health issues. Indonesia is no different. An online survey in April by branding and data firm Snapcart found that 6 per cent of 3,611 respondents in Indonesia use AI "as a friend to talk to and share feelings with". Nearly six in 10 (58 per cent) of the respondents who gave this answer said they would sometimes consider AI as a replacement for psychologists.

People in Southeast Asia's largest economy are not necessarily turning to AI chatbots because they lack human friends, but because AI is available 24/7 and "listens" without judgement, users and observers told CNA Indonesia. The tool, they said, is especially handy in a country with a relatively low number of psychologists. According to the Indonesian Clinical Psychologists Association, the country has 4,004 certified clinical psychologists, of whom 3,084 are actively practising. With a population of about 280 million, this translates to about 1.43 certified clinical psychologists per 100,000 people. In comparison, neighbouring Singapore has 9.7 psychologists per 100,000 population – a ratio that is itself lower than in other Organisation for Economic Cooperation and Development nations.

The potential benefits of using AI in mental health are clear, experts said, even as risks remain and regulation lags behind.

The rise of AI as a trusted outlet for emotional expression is closely tied to people's increasingly digital lives, said clinical psychologist Catarina Asthi Dwi Jayanti from Santosha Mental Health Centre in Bandung. AI conversations can feel more intuitive for those who grew up with texting and screens, she said, adding that at least a dozen clients have told her they have consulted AI. "For some people, writing is a way to organise their thoughts. AI provides that space, without the fear of being judged," she said.

Conversing with ChatGPT is a safe way of rehearsing her thoughts before opening up to somebody close to her, Nirmala said. "Honestly it doesn't feel like I'm talking to a machine. It feels like a conversation with someone who gets me," she said.

AI chatbots offer accessibility, anonymity and speed, said telecommunications expert Heru Sutadi, executive director of the Indonesia ICT Institute. AI platforms, he said, are "programmed to be neutral and non-critical". "That's why users often feel more accepted, even if the responses aren't always deeply insightful," he said. Unlike a session with a psychologist, "you can access AI 24/7, often at little to no cost", Heru said. "Users can share as much as they want without the pressure of social expectations. And best of all, AI replies instantly."

In Indonesia, an in-person session with a private psychologist can cost upwards of 350,000 rupiah (US$21.50). Popular telemedicine platform Halodoc offers psychiatrist consultations at prices starting from 70,000 rupiah, while mental health app Riliv offers online sessions with a psychologist at prices starting from 50,000 rupiah.

Another advantage of a chatbot, said Ila, is that it "won't get annoyed, won't snap, won't have feelings about me bombarding it with a dozen questions". "That's not the case when you're talking to a real person," she added.

As such, AI can serve as a "first safe zone" before someone seeks professional help, especially when dealing with topics such as sexuality, religion, trauma or family conflict, said Catarina. "The anonymity of the internet, and the comfort that comes with it, allows young people to open up without the fear of shame or social stigma," she explained. Some of her clients, she added, turned to AI because they "felt free to share without worrying what others, including psychologists, might think of them, especially if they feared being labelled as strange or overly emotional."

RISKS AND IMPACT ON REAL-LIFE RELATIONSHIPS

But mental health professionals are just as wary of the risks posed by AI chatbots, citing issues such as privacy, regulation of the technology and the impact on users' real-life interactions with others.

The machines can offer a false sense of comfort, Heru said. "The perceived empathy and safety can be misleading. Users might think AI is capable of human warmth when, in reality, it's just an algorithm mimicking patterns."

Another major concern is data privacy, Heru said. Conversations with AI are stored on company servers, and if cyber breaches occur, "sensitive data could be leaked, misused for targeted advertising, profiling, or even sold to third parties". For its part, OpenAI, the company behind ChatGPT, has said: "We do not actively collect personal information to train our models, do not use public internet data to profile individuals, target advertising, or sell user data."

Indonesia released a National Strategy for Artificial Intelligence in 2020, but the document is non-binding. AI is currently governed loosely under the 2008 Electronic Information and Transactions (ITE) Law and the 2022 Personal Data Protection Law, both of which touch on AI but lack specificity. A Code of Ethics for AI was issued by the Ministry of Communication and Digital Affairs in 2023, but its guidelines remain vague. In January this year, Communication and Digital Affairs Minister Meutya Hafid announced that comprehensive AI regulations would be rolled out.

Studies are also emerging on the impact of chatbot usage on users' real-life social interactions. In a 2024 study involving 496 users of the chatbot Replika, researchers from China found that greater use of AI chatbots, and satisfaction with them, could negatively affect a person's real-life interpersonal skills and relationships.

Child and adolescent clinical psychologist Lydia Agnes Gultom from Klinik Utama dr. Indrajana, a healthcare clinic in Jakarta, said AI-based relationships are inherently one-sided. Such interactions could hinder people's abilities to empathise, resolve conflicts, assert themselves, negotiate or collaborate, she said. "In the long run, this reduces exposure to genuine social interaction," said Agnes.

In other countries, experts have highlighted the need for guardrails on the use of AI chatbots for mental health. As these platforms tend to align with and reinforce users' views, they may fail to challenge dangerous beliefs and could potentially drive vulnerable individuals to self-harm, the American Psychological Association told US regulators earlier this year. Safety features introduced by some companies, such as disclaimers that the chatbots are not "real people", are also inadequate, the experts said.

AI can complement the work of mental health professionals, experts told CNA Indonesia. It can offer initial emotional support and a space for humans to share and explore their feelings with the right prompts, said Catarina of Santosha Mental Health Centre. But when it comes to diagnosis and grasping the complexity of human emotions, AI still falls short, she said. "It lacks interview (skills), observation and a battery of assessment tools."

AI cannot provide proper intervention in emergency situations such as suicidal ideation, panic attacks or abuse, said Agnes. Therapeutic relationships rooted in trust, empathy and nonverbal communication can only happen between humans, she added.


Yahoo
3 days ago
How AI chatbots keep people coming back
Chatbots are increasingly looking to keep people chatting, using familiar tactics that we've already seen lead to negative consequences. Sycophancy can make AI chatbots respond in a way that's overly agreeable or flattering. And while having a digital hype person might not seem like a dangerous thing, sycophancy is actually a tactic tech companies use to keep users talking with their bots and returning to their platforms.