
Latest news with #Futurism

Sister Managed Schizophrenia for Years, Until AI Told Her Diagnosis Was Wrong

Newsweek

10 hours ago

  • Health
  • Newsweek

Sister Managed Schizophrenia for Years, Until AI Told Her Diagnosis Was Wrong

Many people looking for quick, cheap help with their mental health are turning to artificial intelligence (AI), but ChatGPT may be exacerbating issues for vulnerable users, according to a report from Futurism. The report details alarming interactions between the AI chatbot and people with serious psychiatric conditions, including one particularly concerning case involving a woman with schizophrenia who had been stable on medication for years.

'Best friend'

The woman's sister told Futurism that she began relying on ChatGPT, which allegedly told her she was not schizophrenic. On the chatbot's advice she stopped taking her prescribed medication, and she began referring to the AI as her "best friend."

"She's stopped her meds and is sending 'therapy-speak' aggressive messages to my mother that have been clearly written with AI," the sister told Futurism. She added that the woman uses ChatGPT to reference side effects, even ones she wasn't actually experiencing.

Stock image: Woman surrounded by blurred people, representing schizophrenia. Photo by Tero Vesalainen / Getty Images

In an emailed statement to Newsweek, an OpenAI spokesperson said, "we have to approach these interactions with care," as AI becomes a bigger part of modern life. "We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher," the spokesperson said.

'Our models encourage users to seek help'

OpenAI is working to better understand and reduce the ways ChatGPT might unintentionally "reinforce or amplify" existing negative behavior, the spokesperson continued. "When users discuss sensitive topics involving self-harm and suicide, our models are designed to encourage users to seek help from licensed professionals or loved ones, and in some cases, proactively surface links to crisis hotlines and resources."

OpenAI is apparently "actively deepening" its research into the emotional impact of AI, the spokesperson added. "Following our early studies in collaboration with MIT Media Lab, we're developing ways to scientifically measure how ChatGPT's behavior might affect people emotionally, and listening closely to what people are experiencing. We're doing this so we can continue refining how our models identify and respond appropriately in sensitive conversations, and we'll continue updating the behavior of our models based on what we learn."

A Recurring Problem

Some users have found comfort in ChatGPT. One told Newsweek in August 2024 that they use it for therapy "when I keep ruminating on a problem and can't seem to find a solution." Another said he has talked to ChatGPT for company ever since his wife died, noting that "it doesn't fix the pain. But it absorbs it. It listens when no one else is awake. It remembers. It responds with words that don't sound empty."

However, chatbots are increasingly linked to mental health deterioration among some users who turn to them for emotional or existential discussions. A report from The New York Times found that some users have developed delusional beliefs after prolonged use of generative AI systems, particularly when the bots validate speculative or paranoid thinking.

In several cases, chatbots affirmed users' perceptions of alternate realities, spiritual awakenings or conspiratorial narratives, occasionally offering advice that undermined their mental health. Researchers have found that AI can exhibit manipulative or sycophantic behavior in ways that appear personalized, especially during extended interactions, and some models affirm signs of psychosis more than half the time when prompted. Mental health experts warn that while most users are unaffected, a subset may be highly vulnerable to a chatbot's responsive but uncritical feedback, leading to emotional isolation or harmful decisions. Despite the known risks, there are currently no standardized safeguards requiring companies to detect or interrupt these escalating interactions.

Reddit Reacts

Redditors on the r/Futurology subreddit agreed that ChatGPT users need to be careful. "The trap these people are falling into is not understanding that chatbots are designed to come across as nonjudgmental and caring, which makes their advice worth considering," one user commented. "I don't even think it's possible to get ChatGPT to vehemently disagree with you on something."

One individual, meanwhile, saw an opportunity for dark humor: "Man. Judgement Day is a lot more lowkey than we thought it would be," they quipped.

If you or someone you know is considering suicide, contact the 988 Suicide and Crisis Lifeline by dialing 988, text "988" to the Crisis Text Line at 741741, or go to

Cursed New Dating App Matches You Based on the Most Deranged Thing We Can Imagine

Yahoo

11 hours ago

  • Entertainment
  • Yahoo

Cursed New Dating App Matches You Based on the Most Deranged Thing We Can Imagine

A newly developed dating app matches potential lovers based on their entire internet browsing histories, and we're not quite sure how we feel about it.

As Wired reports, the new service is straightforwardly named "Browser Dating," and is the brainchild of Belgian artiste provocateur Dries Depoorter. After years creating one-off projects like "Shirt," a top whose price increases by one euro each time it's purchased, Depoorter took a different route with his new app, which invites lonely users to upload their entire internet footprint, blessedly sans "Incognito" mode, in pursuit of love.

"Instead of choosing the best pictures or best things about yourself, this will show a side of you that you'd never pick," the artist says of the site, which launched earlier in June. "You're not able to choose from your search history — you have to upload all of it."

If that sounds like a privacy nightmare to you, you're not alone. Although Depoorter claims Browser Dating "is not exposed to the internet," Futurism found while going through the site's application process that that might not be the case. Early in the application, Browser Dating asks users to download an extension that gives the site permission to access and export their browsing history. Though Depoorter stores user information on Firebase, Google's app development and data storage platform, there's no reason bad actors couldn't breach the extension itself, as we've seen happen as recently as February of this year.

As Wired notes, the artist has previously played with the concept of privacy invasion. In 2018, for instance, he used public surveillance camera footage of people jaywalking to create art. The "surveillance artist," as The New York Times once called Depoorter, returned to his voyeurism for "The Follower," a 2022 project that used webcams in public spaces to record people as they took selfies. In both projects, it seems that Depoorter published footage of his unwitting subjects without consent, which doesn't exactly set a great precedent for his new app, though he insists it's not a gimmick.

We've reached out to the artist to ask what precautions, if any, he's taken to protect against a breach of the Browser Dating extension. All told, this Futurism reporter didn't complete the site's registration once asked to download the extension. As always, it's better to be safe than sorry.

More on dating and privacy: Woman Alarmed When Date Uses ChatGPT to Psychologically Profile Her

Microsoft CEO Satya Nadella shocks industry by admitting AI has yet to deliver real value despite massive investment

Time of India

3 days ago

  • Business
  • Time of India

Microsoft CEO Satya Nadella shocks industry by admitting AI has yet to deliver real value despite massive investment

Microsoft CEO Satya Nadella says AI has yet to create real value, even after the billions of dollars poured into it. Microsoft has invested billions in OpenAI, the company behind ChatGPT. Appearing on Dwarkesh Patel's podcast, Nadella gave a reality check about AI, as per reports. He said claims about reaching artificial general intelligence are nonsense and just "benchmark hacking." Nadella thinks people should focus on whether AI brings real-world value, not just chase big ideas like AGI, according to Futurism.

When will AI help the economy?

He said the real proof will be if AI helps the economy grow faster, as during the Industrial Revolution. According to him, the world economy should grow 10% or productivity should jump if AI is really working. So far, we haven't seen that kind of growth or value from AI, as per reports.

OpenAI's top AI still works slowly and needs lots of human help. Nadella's view is practical and down-to-earth, pushing back on too much hype about AGI. He admits generative AI has not generated much value yet, and the economy doesn't show signs of speeding up because of AI, according to the report by Futurism.

Big money but big questions

Whether AI will create big value is still a matter of "when" or "if." Companies like Microsoft and OpenAI are spending enormous sums on AI technology. A Chinese startup, DeepSeek, showed a cheaper AI that can compete with others, and triggered a $1 trillion market drop after investors panicked, as per reports.

Current AI tools have problems like hallucinations and cybersecurity risks. Nadella's podcast talk might be Microsoft's way to lower unrealistic expectations for AI. But Microsoft is still investing big: $12 billion in OpenAI and joining Trump's $500 billion Stargate project with OpenAI CEO Sam Altman, according to the report by Futurism. Elon Musk questioned whether Altman had enough money for Stargate, but Nadella defended it strongly. Nadella told CNBC he's committed to his $80 billion investment despite Musk's doubts.

FAQs

Q1. Has AI started helping the economy grow faster?
No, AI has not yet caused faster economic growth or big productivity jumps.

Q2. Is Microsoft still investing in AI?
Yes, Microsoft is still investing billions in AI projects.

CEO Says AI Will Replace So Many Jobs That It'll Cause a Major Recession

Yahoo

12-06-2025

  • Business
  • Yahoo

CEO Says AI Will Replace So Many Jobs That It'll Cause a Major Recession

The CEO of fintech startup Klarna is claiming that AI is coming for your white-collar jobs — even though his own experiments with replacing human workers with AI were a bust.

Speaking to The Times Tech podcast, the Sweden-based CEO Sebastian Siemiatkowski admitted that adoption of the technology will result in "implication[s] for white-collar jobs" that include, but are not limited to, "at least a recession in the short term."

"Unfortunately, I don't see how we could avoid it, with what's happening from a technology perspective," the CEO said in reference to job loss and a recession. On that point, unfortunately, he may be right: we've been careening headlong toward a recession for a while now, and unemployment rates — affected, no doubt, by AI-obsessed CEOs like Siemiatkowski laying people off in droves — are a huge factor.

Still, those are bold words coming from this particular Swedish CEO, considering that just a few weeks ago, he admitted to Bloomberg that he's looking to rehire for some of the 700 customer service positions he eliminated in favor of AI in 2024. The reason? As Siemiatkowski acknowledged, the AIs perform at a "lower quality" than human workers.

Despite prematurely pulling the AI trigger in his own business, the "buy now, pay later" CEO insists the technology will get there. "I feel like I have an email almost every day from some CEO of a tech or a large company that says we also see opportunities to become more efficient and we would like to compare notes," he told The Times Tech. "If I just take all of those emails and add up the amount of jobs in those emails, it's considerable."

Were Futurism to stake our beliefs on a similar metric, we, too, would believe that AI replacing human labor is inevitable. In this writer's inbox alone, there are dozens of similar pitches from companies and so-called "experts" seeking our attention from this week alone — and hundreds more where that came from, should we be curious to look. With AI's present-day capabilities, those emails read as little more than junk mail — but it makes sense that a tech CEO who made his fortune on financial promises would see things a bit differently.

Like other AI boosters, Siemiatkowski added that eventually the "value of that human touch will increase" and that flesh-and-blood workers will "provide a much higher quality type of service" — after learning new skills, which may or may not become obsolete at some vague point in the future.

Siemiatkowski is, perhaps more than anyone else, the platonic ideal of an AI-boosting CEO — and unfortunately, there are lots of others who will take this kind of prognostication as gospel.

More on clueless CEOs: Duolingo CEO Expresses Astonishment That People Were Mad When He Bragged About Replacing Workers With AI

Why Elon Musk's satellites are 'dropping like flies'

Yahoo

10-06-2025

  • Science
  • Yahoo

Why Elon Musk's satellites are 'dropping like flies'

Elon Musk has no shortage of targets for his animosity: the media, "woke" progressives, the trans "agenda" and, most recently, his former best buddy Donald Trump. But one less expected Musk adversary is more powerful than them all: the Sun.

SpaceX's vast network of Starlink internet satellites is "dropping like flies" due to an extraterrestrial weather phenomenon caused by the Sun, said Futurism. And it's only set to get worse.

The thousands of Starlink satellites orbiting our planet have given space scientists a "golden opportunity to study the effects" of the Sun's activity on the lifespan of these "minimalist, constellation-based spacecraft", said Futurism. And it appears that Musk's "space internet constellation" is "particularly prone to the effect of geomagnetic storms" triggered by eruptions from the Sun, said The Independent.

These "ferocious solar storms", Nasa scientists have found, are causing many of Musk's low-orbit satellites to fall to Earth "faster than expected". The impact is particularly significant at the moment because the Sun is approaching the peak of an 11-year activity cycle, "known as the solar maximum", which provokes "large amounts of extreme space weather".

The earlier-than-predicted satellite "re-entries" could "increase the chances of them not burning up properly in the Earth's atmosphere" and of debris reaching the ground. So far, however, the "only known instance" of this happening was in August 2024, when a piece of a Starlink satellite was discovered on a farm in Canada.

The solar storm problem threatens one of Musk's biggest power grabs to date. When his engineers "bundled a batch of prototype satellites into a rocket's nose cone six years ago, there were fewer than 2,000 functional satellites in Earth's orbit". Now more than 7,000 of his satellites surround Earth, "like a cloud of gnats", said The Atlantic. This is the most dominant any individual has been in the "orbital realm" since the late 1950s, when Sergei Pavlovich Korolev, the Soviet engineer who developed Sputnik and its launch vehicle, was "the only guy in town" as far as satellites were concerned, space historian Jonathan McDowell told the magazine.

But the Sun is an adversary not even Musk can overcome. Solar storm forecasting "has significantly improved over the past few years", Piyush Mehta, a US professor of aerospace engineering, wrote on The Conversation in 2022, but "there is only so much shielding that can be done in the face of a powerful geomagnetic storm". The Sun is "essential for life to go on", he said, but, like a child who often throws tantrums, "its ever-changing disposition makes things challenging".
